Import VM from OVA not working
by kiv@intercom.pro
Hi all.
Made dir /var/lib/exports/import
chown 36:36 /var/lib/exports/import/VM.ova
Went to the oVirt admin portal, Import VM with OVA as the source, file path: /var/lib/exports/import/VM.ova
Error:
Failed to load VM configuration from OVA file: /var/lib/exports/import/VM.ova
Log file ovirt-query-ova-ansible:
2019-01-18 10:26:25,018 p=8066 u=ovirt | Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1547789184.32-243441920380227/query_ova.py", line 59, in <module>
ovf = get_ovf_from_dir(ova_path, sys.argv[2])
File "/root/.ansible/tmp/ansible-tmp-1547789184.32-243441920380227/query_ova.py", line 29, in get_ovf_from_dir
files = os.listdir(ova_path)
OSError: [Errno 2] No such file or directory: '/var/lib/exports/import/VM.ova'
ls -la /var/lib/exports/import/
-rw-r--r--. 1 vdsm kvm 4165205504 Jan 17 22:51 VM.ova
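For what it's worth, errno 2 here is ENOENT (the path is not visible at all on the host running the query), not ENOTDIR (path exists but is a plain file); a generic shell check, run on whichever host actually executes the query playbook, distinguishes the two:

p=/var/lib/exports/import/VM.ova
# ENOENT case: the path simply is not there on this host
[ -e "$p" ] || echo "path does not exist on this host"
# ENOTDIR case: the path exists but is a packed .ova file, not a directory
[ -e "$p" ] && [ ! -d "$p" ] && echo "path exists, but as a regular file"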
6 years, 4 months
ovirt node ng 4.3.0 rc1 upgrade fails
by Jorick Astrego
Hi,
Trying to update oVirt Node 4.2.7.1 to 4.3.0, but it fails with the following dependency problems:
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist, package_upload, product-id, search-disabled-repos, subscription-manager, vdsmupgrade
This system is not registered with an entitlement server. You can use subscription-manager to register.
Loading mirror speeds from cached hostfile
 * ovirt-4.3-epel: ftp.nluug.nl
Resolving Dependencies
--> Running transaction check
---> Package ovirt-host.x86_64 0:4.2.3-1.el7 will be updated
---> Package ovirt-host.x86_64 0:4.3.0-1.el7 will be an update
--> Processing Dependency: ovirt-host-dependencies = 4.3.0-1.el7 for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: aide for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: openscap for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: openscap-utils for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: pam_pkcs11 for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: scap-security-guide for package: ovirt-host-4.3.0-1.el7.x86_64
--> Running transaction check
---> Package ovirt-host.x86_64 0:4.3.0-1.el7 will be an update
--> Processing Dependency: aide for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: openscap for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: openscap-utils for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: pam_pkcs11 for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: scap-security-guide for package: ovirt-host-4.3.0-1.el7.x86_64
---> Package ovirt-host-dependencies.x86_64 0:4.2.3-1.el7 will be updated
---> Package ovirt-host-dependencies.x86_64 0:4.3.0-1.el7 will be an update
--> Processing Dependency: vdsm >= 4.30.5 for package: ovirt-host-dependencies-4.3.0-1.el7.x86_64
--> Processing Dependency: vdsm-client >= 4.30.5 for package: ovirt-host-dependencies-4.3.0-1.el7.x86_64
--> Running transaction check
---> Package ovirt-host.x86_64 0:4.3.0-1.el7 will be an update
--> Processing Dependency: aide for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: openscap for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: openscap-utils for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: pam_pkcs11 for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: scap-security-guide for package: ovirt-host-4.3.0-1.el7.x86_64
---> Package vdsm.x86_64 0:4.20.43-1.el7 will be updated
--> Processing Dependency: vdsm = 4.20.43-1.el7 for package: vdsm-hook-ethtool-options-4.20.43-1.el7.noarch
--> Processing Dependency: vdsm = 4.20.43-1.el7 for package: vdsm-gluster-4.20.43-1.el7.x86_64
--> Processing Dependency: vdsm = 4.20.43-1.el7 for package: vdsm-hook-vmfex-dev-4.20.43-1.el7.noarch
--> Processing Dependency: vdsm = 4.20.43-1.el7 for package: vdsm-hook-fcoe-4.20.43-1.el7.noarch
---> Package vdsm.x86_64 0:4.30.5-1.el7 will be an update
--> Processing Dependency: vdsm-http = 4.30.5-1.el7 for package: vdsm-4.30.5-1.el7.x86_64
--> Processing Dependency: vdsm-jsonrpc = 4.30.5-1.el7 for package: vdsm-4.30.5-1.el7.x86_64
--> Processing Dependency: vdsm-python = 4.30.5-1.el7 for package: vdsm-4.30.5-1.el7.x86_64
--> Processing Dependency: qemu-kvm-rhev >= 10:2.12.0-18.el7_6.1 for package: vdsm-4.30.5-1.el7.x86_64
---> Package vdsm-client.noarch 0:4.20.43-1.el7 will be updated
---> Package vdsm-client.noarch 0:4.30.5-1.el7 will be an update
--> Processing Dependency: vdsm-api = 4.30.5-1.el7 for package: vdsm-client-4.30.5-1.el7.noarch
--> Processing Dependency: vdsm-yajsonrpc = 4.30.5-1.el7 for package: vdsm-client-4.30.5-1.el7.noarch
--> Running transaction check
---> Package ovirt-host.x86_64 0:4.3.0-1.el7 will be an update
--> Processing Dependency: aide for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: openscap for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: openscap-utils for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: pam_pkcs11 for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: scap-security-guide for package: ovirt-host-4.3.0-1.el7.x86_64
---> Package qemu-kvm-ev.x86_64 10:2.10.0-21.el7_5.7.1 will be updated
---> Package qemu-kvm-ev.x86_64 10:2.12.0-18.el7_6.1.1 will be an update
--> Processing Dependency: qemu-kvm-common-ev = 10:2.12.0-18.el7_6.1.1 for package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
--> Processing Dependency: qemu-img-ev = 10:2.12.0-18.el7_6.1.1 for package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
--> Processing Dependency: libibumad.so.3()(64bit) for package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
--> Processing Dependency: libgbm.so.1()(64bit) for package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
--> Processing Dependency: libepoxy.so.0()(64bit) for package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
---> Package vdsm-api.noarch 0:4.20.43-1.el7 will be updated
---> Package vdsm-api.noarch 0:4.30.5-1.el7 will be an update
---> Package vdsm-gluster.x86_64 0:4.20.43-1.el7 will be updated
---> Package vdsm-gluster.x86_64 0:4.30.5-1.el7 will be an update
---> Package vdsm-hook-ethtool-options.noarch 0:4.20.43-1.el7 will be updated
---> Package vdsm-hook-ethtool-options.noarch 0:4.30.5-1.el7 will be an update
---> Package vdsm-hook-fcoe.noarch 0:4.20.43-1.el7 will be updated
---> Package vdsm-hook-fcoe.noarch 0:4.30.5-1.el7 will be an update
---> Package vdsm-hook-vmfex-dev.noarch 0:4.20.43-1.el7 will be updated
---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.5-1.el7 will be an update
---> Package vdsm-http.noarch 0:4.20.43-1.el7 will be updated
---> Package vdsm-http.noarch 0:4.30.5-1.el7 will be an update
---> Package vdsm-jsonrpc.noarch 0:4.20.43-1.el7 will be updated
---> Package vdsm-jsonrpc.noarch 0:4.30.5-1.el7 will be an update
---> Package vdsm-python.noarch 0:4.20.43-1.el7 will be updated
---> Package vdsm-python.noarch 0:4.30.5-1.el7 will be an update
--> Processing Dependency: vdsm-common = 4.30.5-1.el7 for package: vdsm-python-4.30.5-1.el7.noarch
--> Processing Dependency: vdsm-network = 4.30.5-1.el7 for package: vdsm-python-4.30.5-1.el7.noarch
---> Package vdsm-yajsonrpc.noarch 0:4.20.43-1.el7 will be updated
---> Package vdsm-yajsonrpc.noarch 0:4.30.5-1.el7 will be an update
--> Running transaction check
---> Package ovirt-host.x86_64 0:4.3.0-1.el7 will be an update
--> Processing Dependency: aide for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: openscap for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: openscap-utils for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: pam_pkcs11 for package: ovirt-host-4.3.0-1.el7.x86_64
--> Processing Dependency: scap-security-guide for package: ovirt-host-4.3.0-1.el7.x86_64
---> Package qemu-img-ev.x86_64 10:2.10.0-21.el7_5.7.1 will be updated
---> Package qemu-img-ev.x86_64 10:2.12.0-18.el7_6.1.1 will be an update
---> Package qemu-kvm-common-ev.x86_64 10:2.10.0-21.el7_5.7.1 will be updated
---> Package qemu-kvm-common-ev.x86_64 10:2.12.0-18.el7_6.1.1 will be an update
---> Package qemu-kvm-ev.x86_64 10:2.12.0-18.el7_6.1.1 will be an update
--> Processing Dependency: libibumad.so.3()(64bit) for package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
--> Processing Dependency: libgbm.so.1()(64bit) for package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
--> Processing Dependency: libepoxy.so.0()(64bit) for package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
---> Package vdsm-common.noarch 0:4.20.43-1.el7 will be updated
---> Package vdsm-common.noarch 0:4.30.5-1.el7 will be an update
---> Package vdsm-network.x86_64 0:4.20.43-1.el7 will be updated
---> Package vdsm-network.x86_64 0:4.30.5-1.el7 will be an update
--> Finished Dependency Resolution
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
MSG:
2019-01-11 14:25:52,185 [INFO] yum:52830:MainThread
@connection.py:871 - Connection built:
host=subscription.rhsm.redhat.com port=443 handler=/subscription
auth=identity_cert ca_dir=/etc/rhsm/ca/ insecure=False
2019-01-11 14:25:52,186 [INFO] yum:52830:MainThread @repolib.py:494
- repos updated: Repo updates
Total repo updates: 0
Updated
<NONE>
Added (new)
<NONE>
Deleted
<NONE>
Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
(ovirt-4.3-centos-qemu-ev)
Requires: libibumad.so.3()(64bit)
Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
(ovirt-4.3-centos-qemu-ev)
Requires: libgbm.so.1()(64bit)
Error: Package: ovirt-host-4.3.0-1.el7.x86_64 (ovirt-4.3-pre)
Requires: openscap-utils
Error: Package: ovirt-host-4.3.0-1.el7.x86_64 (ovirt-4.3-pre)
Requires: openscap
Error: Package: ovirt-host-4.3.0-1.el7.x86_64 (ovirt-4.3-pre)
Requires: pam_pkcs11
Error: Package: ovirt-host-4.3.0-1.el7.x86_64 (ovirt-4.3-pre)
Requires: aide
Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64
(ovirt-4.3-centos-qemu-ev)
Requires: libepoxy.so.0()(64bit)
Error: Package: ovirt-host-4.3.0-1.el7.x86_64 (ovirt-4.3-pre)
Requires: scap-security-guide
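For what it's worth, plain yum can show which enabled repository, if any, is expected to provide the missing bits (generic yum usage, nothing oVirt-specific):

yum -q provides 'libibumad.so.3()(64bit)' 'libgbm.so.1()(64bit)' 'libepoxy.so.0()(64bit)'
yum -q provides aide openscap openscap-utils pam_pkcs11 scap-security-guide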
Met vriendelijke groet, With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270 info(a)netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
----------------
6 years, 4 months
ovirtmgmt always out of sync on ovirt node
by Jorick Astrego
Hi,
We're switching from CentOS 7 hosts to oVirt Node NG (tried 4.2.7, 4.3rc1 and 4.3rc2), and after adding them to oVirt (currently on 4.3rc2) the ovirtmgmt interface is always out of sync.
I tried syncing the network and refreshing capabilities. I also tried removing ovirtmgmt from the interface and adding it to a bond, but then I get this error:
Cannot setup Networks. The following Network definitions on the
Network Interface are different than those on the Logical Network.
Please synchronize the Network Interface before editing network
ovirtmgmt. The non-synchronized values are: ${OUTAVERAGELINKSHARE}
${HOST_OUT_OF_SYNC} - null, ${DC_OUT_OF_SYNC} - 50
I can set up the bond at install time so that ovirtmgmt uses it and the host is usable, but I'm hesitant to do this in production because I cannot change the interface afterwards.
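If it helps the debugging, the host-side values that the engine diffs against the logical network definition can be dumped directly on the node (standard vdsm-client usage):

# dump what vdsm reports for its networks and NICs; these are the values the
# engine compares against the ovirtmgmt definition in the data center
vdsm-client Host getCapabilities > /tmp/caps.json
grep -A 20 '"ovirtmgmt"' /tmp/caps.json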
Regards,
Jorick Astrego
Met vriendelijke groet, With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270 info(a)netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
----------------
6 years, 4 months
Converting Thin/Dependent to Thick/Independent
by Luca 'remix_tj' Lorenzetto
Hello,
I'm looking for a way to speed up the deployment of new VMs. I've seen that if, under "Resource Allocation", I set "Storage Allocation" to Thin, VM creation is very quick.
But this makes the machine dependent on the template, and if for any reason the template gets corrupted or is not accessible, the VM will not boot.
Is there any way to convert the VM from dependent to independent while it is online?
Luca
--
"It is absurd to employ men of excellent intelligence to perform calculations that could be entrusted to anyone if machines were used."
Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)
"The Internet is the largest library in the world.
The problem is that the books are all scattered on the floor."
John Allen Paulos, Mathematician (b. 1945)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
6 years, 4 months
oVirt 4.3 RC2 and libgfapi
by Gianluca Cecchi
Just installed a single-host HCI with gluster, with only the engine VM running.
Is the situation below expected?
# virsh -r list
Id Name State
----------------------------------------------------
2 HostedEngine running
and
# virsh -r dumpxml 2
. . .
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source
file='/var/run/vdsm/storage/e4eb6832-e0f6-40ee-902f-f301e5a3a643/fc34d770-9318-4539-9233-bfb1c5d68d14/b151557e-f1a2-45cb-b5c9-12c1f470467e'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<serial>fc34d770-9318-4539-9233-bfb1c5d68d14</serial>
<alias name='ua-fc34d770-9318-4539-9233-bfb1c5d68d14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07'
function='0x0'/>
</disk>
. . .
where
# ll /var/run/vdsm/storage/e4eb6832-e0f6-40ee-902f-f301e5a3a643/
total 24
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44
39df7b45-4932-4bfe-b69e-4fb2f8872f4f ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/39df7b45-4932-4bfe-b69e-4fb2f8872f4f
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44
5ba6cd9e-b78d-4de4-9b7f-9688365128bf ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/5ba6cd9e-b78d-4de4-9b7f-9688365128bf
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 15:56
8b8e41e0-a875-4204-8ab1-c10214a49f5c ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/8b8e41e0-a875-4204-8ab1-c10214a49f5c
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 15:56
c21a62ba-73d2-4914-940f-cee6a67a1b08 ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/c21a62ba-73d2-4914-940f-cee6a67a1b08
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44
fc34d770-9318-4539-9233-bfb1c5d68d14 ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/fc34d770-9318-4539-9233-bfb1c5d68d14
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44
fd73354d-699b-478e-893c-e2a0bd1e6cbb ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/fd73354d-699b-478e-893c-e2a0bd1e6cbb
So the hosted engine is not using libgfapi?
Also, on the hosted engine:
[root@hciengine ~]# engine-config -g LibgfApiSupported
LibgfApiSupported: false version: 4.1
LibgfApiSupported: false version: 4.2
LibgfApiSupported: false version: 4.3
[root@hciengine ~]#
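For reference, the flag itself is just an engine-config value, so it can be flipped per compatibility version and the engine restarted (whether doing so is advisable, given the HA bug referenced below, is a separate question):

engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine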
So if I import a CentOS 7 Atomic host image from the glance repo as a template and create a new VM from it, when running this VM I get:
# virsh -r dumpxml 3
. . .
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source file='/rhev/data-center/mnt/glusterSD/10.10.10.216:
_data/601d725a-1622-4dc8-a24d-2dba72ddf6ae/images/e4f92226-0f56-4822-a622-d1ebff41df9f/c6b2e076-1519-433e-9b37-2005c9ce6d2e'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<serial>e4f92226-0f56-4822-a622-d1ebff41df9f</serial>
<boot order='1'/>
<alias name='ua-e4f92226-0f56-4822-a622-d1ebff41df9f'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</disk>
. . .
I remember there was an "old" bug open that caused this default of not enabling libgfapi.
Does this mean it has not been solved yet?
If I remember correctly the bugzilla was this one, related to HA:
https://bugzilla.redhat.com/show_bug.cgi?id=1484227
which is still in NEW status... after almost 2 years.
Is this the only one open?
Thanks,
Gianluca
6 years, 4 months
ovirt 4.3 / Adding NFS storage issue
by Devin Acosta
I installed the latest 4.3 release candidate and tried to add an NFS mount to the Data Center, and it errors in the GUI with “Error while executing action New NFS Storage Domain: Invalid parameter”; in vdsm.log I see it is passing “block_size=None”. It does this regardless of whether the mount is NFS v3 or v4.
InvalidParameterException: Invalid parameter: 'block_size=None'
2019-01-12 20:37:58,241-0700 INFO (jsonrpc/7) [vdsm.api] START
createStorageDomain(storageType=1,
sdUUID=u'b30c64c4-4b1f-4ebf-828b-e54c330ae84c', domainName=u'nfsdata',
typeSpecificArg=u'192.168.19.155:/data/data', domClass=1, domVersion=u'4',
block_size=None, max_hosts=2000, options=None)
from=::ffff:192.168.19.178,51042, flow_id=67743df7,
task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:48)
2019-01-12 20:37:58,241-0700 INFO (jsonrpc/7) [vdsm.api] FINISH
createStorageDomain error=Invalid parameter: 'block_size=None'
from=::ffff:192.168.19.178,51042, flow_id=67743df7,
task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:52)
2019-01-12 20:37:58,241-0700 ERROR (jsonrpc/7) [storage.TaskManager.Task]
(Task='ad82f581-9638-48f1-bcd9-669b9809b34a') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
File "<string>", line 2, in createStorageDomain
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2583,
in createStorageDomain
alignment = clusterlock.alignment(block_size, max_hosts)
File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
661, in alignment
raise se.InvalidParameterException('block_size', block_size)
InvalidParameterException: Invalid parameter: 'block_size=None'
2019-01-12 20:37:58,242-0700 INFO (jsonrpc/7) [storage.TaskManager.Task]
(Task='ad82f581-9638-48f1-bcd9-669b9809b34a') aborting: Task is aborted:
u"Invalid parameter: 'block_size=None'" - code 100 (task:1181)
2019-01-12 20:37:58,242-0700 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH
createStorageDomain error=Invalid parameter: 'block_size=None'
(dispatcher:81)
2019-01-12 20:37:58,242-0700 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
call StorageDomain.create failed (error 1000) in 0.00 seconds (__init__:312)
2019-01-12 20:37:58,541-0700 INFO (jsonrpc/1) [vdsm.api] START
disconnectStorageServer(domType=1,
spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'tpgt': u'1',
u'id': u'db7d16c8-7497-42db-8a75-81cb7f9d3350', u'connection':
u'192.168.19.155:/data/data', u'iqn': u'', u'user': u'', u'ipv6_enabled':
u'false', u'protocol_version': u'auto', u'password': '********', u'port':
u''}], options=None) from=::ffff:192.168.19.178,51042,
flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c,
task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:48)
2019-01-12 20:37:58,542-0700 INFO (jsonrpc/1) [storage.Mount] unmounting
/rhev/data-center/mnt/192.168.19.155:_data_data (mount:212)
2019-01-12 20:37:59,087-0700 INFO (jsonrpc/1) [vdsm.api] FINISH
disconnectStorageServer return={'statuslist': [{'status': 0, 'id':
u'db7d16c8-7497-42db-8a75-81cb7f9d3350'}]}
from=::ffff:192.168.19.178,51042,
flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c,
task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:54)
2019-01-12 20:37:59,089-0700 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call StoragePool.disconnectStorageServer succeeded in 0.55 seconds
(__init__:312)
6 years, 4 months
ovirt node ng 4.3.0 rc1 and HCI single host problems
by Gianluca Cecchi
Let's start a new thread more focused on the subject
I'm just testing a single-host HCI deployment using the oVirt Node NG CentOS 7 iso.
I was able to complete the gluster setup via cockpit with these modifications:
1) I wanted to check via ssh and found that the *key files under /etc/ssh/ had wrong permissions, so the ssh daemon didn't start after installation of the node from the iso; changing them to 600 and restarting the service fixed it.
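In case it saves someone a search, the fix boils down to this (host key file names can differ per algorithm):

chmod 600 /etc/ssh/ssh_host_*_key
systemctl restart sshd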
2) I used a single disk configured as JBOD, so I chose that option instead of the default proposed RAID6.
But the playbook failed with:
. . .
PLAY [gluster_servers]
*********************************************************
TASK [Create LVs with specified size for the VGs]
******************************
changed: [192.168.124.211] => (item={u'lv': u'gluster_thinpool_sdb',
u'size': u'50GB', u'extent': u'100%FREE', u'vg': u'gluster_vg_sdb'})
PLAY RECAP
*********************************************************************
192.168.124.211 : ok=1 changed=1 unreachable=0
failed=0
Ignoring errors...
Error: Section diskcount not found in the configuration file
Reading inside the playbooks involved here:
/usr/share/gdeploy/playbooks/auto_lvcreate_for_gluster.yml
/usr/share/gdeploy/playbooks/vgcreate.yml
I find this snippet:
- name: Convert the logical volume
  lv: action=convert thinpool={{ item.vg }}/{{ item.pool }}
      poolmetadata={{ item.vg }}/'metadata' poolmetadataspare=n
      vgname={{ item.vg }} disktype="{{ disktype }}"
      diskcount="{{ diskcount }}"
      stripesize="{{ stripesize }}"
      chunksize="{{ chunksize | default('') }}"
      snapshot_reserve="{{ snapshot_reserve }}"
  with_items: "{{ lvpools }}"
  ignore_errors: yes
I simply edited the gdeploy.conf via the GUI button, adding this section under the [disktype] one:
"
[diskcount]
1
"
then I cleaned the lv/vg/pv and the gdeploy step completed successfully.
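For anyone hitting the same thing, the edited part of gdeploy.conf ends up looking roughly like this (jbod comes from the choice in point 2 above; the single-disk diskcount is the workaround described here):

[disktype]
jbod

[diskcount]
1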
3) At the first stage of the ansible deploy I get this failed command, which doesn't seem to prevent completion but which I haven't understood:
PLAY [gluster_servers]
*********************************************************
TASK [Run a command in the shell]
**********************************************
failed: [192.168.124.211] (item=vdsm-tool configure --force) => changed: true, rc: 1, msg: non-zero return code, start: 2019-01-11 10:59:53.672073, end: 2019-01-11 10:59:55.147601, delta: 0:00:01.475528

stderr:
Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 220, in main
    return tool_command[cmd]["command"](*args)
  File "/usr/lib/python2.7/site-packages/vdsm/tool/__init__.py", line 40, in wrapper
    func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py", line 143, in configure
    _configure(c)
  File "/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py", line 90, in _configure
    getattr(module, 'configure', lambda: None)()
  File "/usr/lib/python2.7/site-packages/vdsm/tool/configurators/bond_defaults.py", line 39, in configure
    sysfs_options_mapper.dump_bonding_options()
  File "/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py", line 48, in dump_bonding_options
    with open(sysfs_options.BONDING_DEFAULTS, 'w') as f:
IOError: [Errno 2] No such file or directory: '/var/run/vdsm/bonding-defaults.json'

stdout:
Checking configuration status...

abrt is already configured for vdsm
lvm is configured for vdsm
libvirt is already configured for vdsm
SUCCESS: ssl configured to true. No conflicts
Manual override for multipath.conf detected - preserving current configuration
This manual override for multipath.conf was based on downrevved template. You are strongly advised to contact your support representatives

Running configure...
Reconfiguration of abrt is done.
Reconfiguration of passwd is done.
Reconfiguration of libvirt is done.
to retry, use: --limit @/tmp/tmpQXe2el/shell_cmd.retry
PLAY RECAP
*********************************************************************
192.168.124.211 : ok=0 changed=0 unreachable=0
failed=1
Would it be possible to save the ansible playbook log in some way even when it completes OK, without going directly to the "successful" page?
Or is it stored anyway in some location on the host's disk?
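(As a generic workaround, standard Ansible logging can be forced on the controller side; whether the cockpit wizard honors it is an assumption on my part:)

# /etc/ansible/ansible.cfg -- or export ANSIBLE_LOG_PATH=/var/log/ansible-deploy.log
[defaults]
log_path = /var/log/ansible-deploy.log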
I then proceeded with the Hosted Engine install/setup, and
4) it fails at the final stages of the local engine VM setup, during host activation:
[ INFO ] TASK [oVirt.hosted-engine-setup : Set Engine public key as
authorized key without validating the TLS/SSL certificates]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Obtain SSO token using
username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Ensure that the target
datacenter is present]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Ensure that the target cluster
is present in the target datacenter]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Enable GlusterFS at cluster
level]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Set VLAN ID at datacenter level]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Force host-deploy in offline
mode]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Add host]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Wait for the host to be up]
then after several minutes:
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts":
[{"address": "ov4301.localdomain.local", "affinity_labels": [],
"auto_numa_status": "unknown", "certificate": {"organization":
"localdomain.local", "subject":
"O=localdomain.local,CN=ov4301.localdomain.local"}, "cluster": {"href":
"/ovirt-engine/api/clusters/5e8fea14-158b-11e9-b2f0-00163e29b9f2", "id":
"5e8fea14-158b-11e9-b2f0-00163e29b9f2"}, "comment": "", "cpu": {"speed":
0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices":
[], "external_network_provider_configurations": [], "external_status":
"ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [],
"href": "/ovirt-engine/api/hosts/4202de75-75d3-4dcb-b128-2c4a1d257a15",
"id": "4202de75-75d3-4dcb-b128-2c4a1d257a15", "katello_errata": [],
"kdump_status": "unknown", "ksm": {"enabled": false},
"max_scheduling_memory": 0, "memory": 0, "name":
"ov4301.localdomain.local", "network_attachments": [], "nics": [],
"numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline":
""}, "permissions": [], "port": 54321, "power_management":
{"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true,
"pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority":
5, "status": "none"}, "ssh": {"fingerprint":
"SHA256:iqeQjdWCm15+xe74xEnswrgRJF7JBAWrvsjO/RaW8q8", "port": 22},
"statistics": [], "status": "install_failed",
"storage_connection_extensions": [], "summary": {"total": 0}, "tags": [],
"transparent_huge_pages": {"enabled": false}, "type": "ovirt_node",
"unmanaged_networks": [], "update_available": false}]}, "attempts": 120,
"changed": false}
[ INFO ] TASK [oVirt.hosted-engine-setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : debug]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Give the vm time to flush dirty
buffers]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Copy engine logs]
[ INFO ] TASK [oVirt.hosted-engine-setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Remove local vm dir]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : debug]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Remove temporary entry in
/etc/hosts for the local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
system may not be provisioned according to the playbook results: please
check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
Looking at the log
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190111113227-ov4301.localdomain.local-5d387e0d.log
it seems the error is about ovirt-imageio-daemon:
2019-01-11 11:32:26,893+0100 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:863 execute-result: ('/usr/bin/systemctl', 'start',
'ovirt-imageio-daemon.service'), rc=1
2019-01-11 11:32:26,894+0100 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:921 execute-output: ('/usr/bin/systemctl', 'start',
'ovirt-imageio-daemon.service') stdout:
2019-01-11 11:32:26,895+0100 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:926 execute-output: ('/usr/bin/systemctl', 'start',
'ovirt-imageio-daemon.service') stderr:
Job for ovirt-imageio-daemon.service failed because the control process
exited with error code. See "systemctl status ovirt-imageio-daemon.service"
and "journalctl -xe" for details.
2019-01-11 11:32:26,896+0100 DEBUG otopi.context context._executeMethod:143
method exception
Traceback (most recent call last):
File "/tmp/ovirt-PBFI2dyoDO/pythonlib/otopi/context.py", line 133, in
_executeMethod
method['method']()
File
"/tmp/ovirt-PBFI2dyoDO/otopi-plugins/ovirt-host-deploy/vdsm/packages.py",
line 175, in _start
self.services.state('ovirt-imageio-daemon', True)
File "/tmp/ovirt-PBFI2dyoDO/otopi-plugins/otopi/services/systemd.py",
line 141, in state
service=name,
RuntimeError: Failed to start service 'ovirt-imageio-daemon'
2019-01-11 11:32:26,898+0100 ERROR otopi.context context._executeMethod:152
Failed to execute stage 'Closing up': Failed to start service
'ovirt-imageio-daemon'
2019-01-11 11:32:26,899+0100 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE closeup METHOD
otopi.plugins.ovirt_host_deploy.vdsm.packages.Plugin._start
(odeploycons.packages.vdsm.started)
The reason:
[root@ov4301 ~]# systemctl status ovirt-imageio-daemon -l
● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service;
disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Fri 2019-01-11 11:32:29 CET;
27min ago
Process: 11625 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited,
status=1/FAILURE)
Main PID: 11625 (code=exited, status=1/FAILURE)
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]:
ovirt-imageio-daemon.service: main process exited, code=exited,
status=1/FAILURE
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: Failed to start oVirt
ImageIO Daemon.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: Unit
ovirt-imageio-daemon.service entered failed state.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]:
ovirt-imageio-daemon.service failed.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]:
ovirt-imageio-daemon.service holdoff time over, scheduling restart.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: Stopped oVirt ImageIO
Daemon.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: start request repeated
too quickly for ovirt-imageio-daemon.service
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: Failed to start oVirt
ImageIO Daemon.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: Unit
ovirt-imageio-daemon.service entered failed state.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]:
ovirt-imageio-daemon.service failed.
[root@ov4301 ~]#
The file /var/log/ovirt-imageio-daemon/daemon.log contains
2019-01-11 10:28:30,191 INFO (MainThread) [server] Starting (pid=3702,
version=1.4.6)
2019-01-11 10:28:30,229 ERROR (MainThread) [server] Service failed
(remote_service=<ovirt_imageio_daemon.server.RemoteService object at
0x7fea9dc88050>, local_service=<ovirt_imageio_daemon.server.LocalService
object at 0x7fea9ca24850>, control_service=None, running=True)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py",
line 58, in main
start(config)
File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py",
line 99, in start
control_service = ControlService(config)
File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py",
line 206, in __init__
config.tickets.socket, uhttp.UnixWSGIRequestHandler)
File "/usr/lib64/python2.7/SocketServer.py", line 419, in __init__
self.server_bind()
File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/uhttp.py",
line 79, in server_bind
self.socket.bind(self.server_address)
File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 2] No such file or directory
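The bind failure above is the daemon failing to create its control socket; a quick check worth trying (the socket directory is assumed from the 1.4.x defaults, it is not shown in this log):

# does the parent directory for the daemon's unix sockets exist?
ls -ld /run/vdsm /var/run/vdsm
# retry and watch the same log
systemctl restart ovirt-imageio-daemon
tail -n 20 /var/log/ovirt-imageio-daemon/daemon.log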
One potential problem I noticed: on this host I set up eth0 with 192.168.122.x (for ovirtmgmt) and eth1 with 192.168.124.y (for gluster, even if there is only one host for now, aiming at adding the other 2 hosts in a second step), and the libvirt network temporarily created for the local engine VM is also on the 192.168.124.0 network:
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP group default qlen 1000
link/ether 52:54:00:b8:6b:3c brd ff:ff:ff:ff:ff:ff
inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master
virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:b8:6b:3c brd ff:ff:ff:ff:ff:ff
I can change the gluster network of this env and re-test, but would it be possible to make the libvirt network configurable? It seems risky to have a fixed one...
Can I go ahead from this failed hosted engine after understanding the reason for the ovirt-imageio-daemon failure, or am I forced to start from scratch?
Supposing I power this host down and then power it on again, how can I retry without scratching?
Gianluca
6 years, 4 months
Problems uploading ISO
by jerry@tom.com
I am using the engine-iso-uploader command to upload an ISO, but it fails and gives me the following error:
[root@ha ~]# engine-iso-uploader --verbose -i iso upload delta1.iso
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
DEBUG: API Vendor(None) API Version(4.2.0)
DEBUG: id=029786a6-4686-44a1-a667-b948df945712 address=storage1 path=/iso
Uploading, please wait...
INFO: Start uploading delta1.iso
ERROR: glfs_init failed: Success
[root@ha ~]# ^C
I've been looking for a possible solution but nothing has worked yet. Any guidance on this matter will be appreciated.
m1988m(a)tom.com
6 years, 4 months
oVirt 4.3 hosted engine migration
by Maton, Brett
In the updated UI it doesn't seem possible to migrate the hosted engine from
Compute -> Hosts -> Virtual Machines
anymore, although there does appear to be a 'new' Cancel Migration button.
It's handy to be able to migrate the hosted engine from this view: I normally migrate the hosted engine manually to another host before upgrading, and it's nice to be able to do it all in the same area rather than having to switch to the (all) Virtual Machines view and then back to Hosts.
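Until that's back, a possible CLI workaround from the host currently running the engine VM (this relies on the HA agent doing the actual move):

# flag this host as locally in maintenance; ovirt-ha-agent should then migrate
# the hosted engine VM to another HA host; clear the flag afterwards
hosted-engine --set-maintenance --mode=local
hosted-engine --vm-status
hosted-engine --set-maintenance --mode=none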
Any chance this feature could be re-enabled?
Regards,
Brett
6 years, 4 months
oVirt 4.3 RC2 test on single host HCI --> OK
by Gianluca Cecchi
Just tested what is in the subject on a VM (so it was a nested env) configured with 2 disks: 1x60GB for the OS and 1x100GB for gluster.
I used Node NG from ovirt-node-ng-installer-4.3.0-2019011608.el7.iso, and almost all the problems I had with RC1 here:
https://www.mail-archive.com/users@ovirt.org/msg52870.html
have been solved.
The hosted engine deploy completed successfully and I'm able to access
engine web admin and see the VM and its storage domain and connect to
console.
The only persisting problem is that I chose JBOD in my disk config (because I have only one disk) and the deploy gives an error about the missing diskcount parameter in the ansible playbook, so I have to edit the gdeploy conf file from the GUI and add the section
[diskcount]
1
After that, all goes well.
Good job guys!
BTW: is it correct that for data and vmstore I only have the volumes configured, while I have to manually create the storage domains?
If I want to extend this initial configuration to 3 nodes, what would be the installation path to follow?
Thanks,
Gianluca
6 years, 4 months
The admin portal ui should be more simplified
by flerxu@hotmail.com
We have an RHV setup of 11 datacenters, 11 clusters, 40 hosts and 300 VMs.
The four of us administrators are suffering from the new 4.2 UI's lack of active area. The navigation logic also confuses us.
A simple operation needs more clicks than before.
Please just make the UI simpler.
6 years, 4 months
Use oVirt Node as virt-v2v server
by Vinícius Ferrão
Hello,
I’m using the virt-p2v tool to do some VM migrations from unsupported hypervisors and stumbled across the need for a dedicated virt-v2v server to handle conversions.
My first action was: rpm -qa on oVirt Node, and virt-v2v is already installed. So the natural idea was to use oVirt Node as the conversion server; it already has everything needed: virt-v2v and the export domain. So why not?
Well, it turns out it doesn’t work. After pointing the virt-p2v tool at oVirt Node, the conversion procedure hangs at the beginning, requesting a username and password for libvirt that I don't know. And even if I knew them, it would be impossible to input the values, since there’s no escape from the console to enter them nor any form in the GUI to do so.
libguestfs: libvirt version = 4005000 (4.5.0)
libguestfs: guest random name = guestfs-sf2i0jl3mmcxtwly
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libvirt needs authentication to connect to libvirt URI qemu:///system
(see also: http://libvirt.org/auth.html http://libvirt.org/uri.html)
Please enter your authentication name: Please enter your password:
I eventually gave up and spun up a VM with virt-v2v, and the conversion is running.
The question is: wasn’t it a good idea to use oVirt Node as the conversion server? Why did it fail? I still have a lot of conversions to do, and if I can use the node installation it would be great.
Thanks,
Sent from my iPhone
6 years, 4 months
V2V Proxmox to Ovirt
by Sebastian Antunez N.
Hello Guys
I have 3 nodes with Proxmox and 23 virtual machines (Linux, Windows) and need to migrate them to oVirt.
I've searched for information, but everything I find says it is not supported.
Any idea how I can migrate from Proxmox to oVirt? I can shut down all the Proxmox VMs.
Regards
Sebastian
6 years, 4 months
Windows Virtual Machine
by Sebastian Antunez N.
Hello Guys
When a Windows VM is created I install OVTools, drivers, etc., and it always shows me this message:
The latest guest agent needs to be installed and running on the guest.
The oVirt version is 4.2.
Any idea?
Thanks
Sebastian
6 years, 4 months
Re: Cannot Increase Hosted Engine VM Memory
by Douglas Duckworth
Sure, they're attached. In "first attempt" the error seems to be:
2019-01-17 07:49:24,795-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-29) [680f82b3-7612-4d91-afdc-43937aa298a2] EVENT_ID: FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE(2,048), Failed to hot plug memory to VM HostedEngine. Amount of added memory (4000MiB) is not dividable by 256MiB.
Followed by:
2019-01-17 07:49:24,814-05 WARN [org.ovirt.engine.core.bll.UpdateRngDeviceCommand] (default task-29) [26f5f3ed] Validation of action 'UpdateRngDevice' failed for user admin@internal-authz. Reasons: ACTION_TYPE_FAILED_VM_IS_RUNNING
2019-01-17 07:49:24,815-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-29) [26f5f3ed] Updating RNG device of VM HostedEngine (adf14389-1563-4b1a-9af6-4b40370a825b) failed. Old RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}. New RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}.
In "second attempt" I used values that are divisible by 256 MiB, so that message is no longer present, though I get the same error:
2019-01-17 07:56:59,795-05 INFO [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-22) [7059a48f] START, SetAmountOfMemoryVDSCommand(HostName = ovirt-hv1.med.cornell.edu, Params:{hostId='cdd5ffda-95c7-4ffa-ae40-be66f1d15c30', vmId='adf14389-1563-4b1a-9af6-4b40370a825b', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='7f7d97cc-c273-4033-af53-bc9033ea3abe', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='memory', type='MEMORY', specParams='[node=0, size=2048]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='6144'}), log id: 50873daa
2019-01-17 07:56:59,855-05 INFO [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-22) [7059a48f] FINISH, SetAmountOfMemoryVDSCommand, log id: 50873daa
2019-01-17 07:56:59,862-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-22) [7059a48f] EVENT_ID: HOT_SET_MEMORY(2,039), Hotset memory: changed the amount of memory on VM HostedEngine from 4096 to 4096
2019-01-17 07:56:59,881-05 WARN [org.ovirt.engine.core.bll.UpdateRngDeviceCommand] (default task-22) [28fd4c82] Validation of action 'UpdateRngDevice' failed for user admin@internal-authz. Reasons: ACTION_TYPE_FAILED_VM_IS_RUNNING
2019-01-17 07:56:59,882-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-22) [28fd4c82] Updating RNG device of VM HostedEngine (adf14389-1563-4b1a-9af6-4b40370a825b) failed. Old RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}. New RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}.
This message repeats throughout engine.log:
2019-01-17 07:55:43,270-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-89) [] EVENT_ID: VM_MEMORY_UNDER_GUARANTEED_VALUE(148), VM HostedEngine on host ovirt-hv1.med.cornell.edu was guaranteed 8192 MB but currently has 4224 MB
As you can see in the attachment, the host has plenty of memory.
Thank you Simone!
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Thu, Jan 17, 2019 at 5:09 AM Simone Tiraboschi <stirabos(a)redhat.com> wrote:
On Wed, Jan 16, 2019 at 8:22 PM Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
Sorry for accidental send.
Anyway I try to increase physical memory however it won't go above 4096MB. The hypervisor has 64GB.
Do I need to modify this value with Hosted Engine offline?
No, it's not required.
Can you please attach your engine.log for the relevant time frame?
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Wed, Jan 16, 2019 at 1:58 PM Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
Hello
I am trying to increase Hosted Engine physical memory above 4GB
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
6 years, 4 months
Windows 10 VM Virtio Drivers
by Sebastian Antunez N.
Hello Guys
In my oVirt 4.2 environment, when I create a new VM with Windows 10, in the Boot Options menu I attach the floppy disk to mount the virtio drivers, but it only shows me sysprep.
I checked my ISO folder and it shows all the .vfd driver images for Windows and the ISOs, but I cannot attach the drivers as a floppy.
This problem only happens with Windows VMs.
Any idea why this happens?
Regards
Sebastian
6 years, 4 months
Cannot Increase Hosted Engine VM Memory
by Douglas Duckworth
Hello
I am trying to increase Hosted Engine physical memory above 4GB
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
6 years, 4 months
migrate hosted-engine vm to another cluster?
by Douglas Duckworth
Hello
I am trying to migrate my hosted-engine VM to another cluster in the same data center. Hosts in both clusters have the same logical networks and storage, yet migrating the VM isn't an option.
To get the hosted-engine VM onto the other cluster I started the VM on a host in that other cluster using "hosted-engine --vm-start".
However, HostedEngine is still associated with the old cluster, as shown in the attachment, so I cannot live migrate the VM. Does anyone know how to resolve this? With other VMs one can shut them down and then use the "Edit" option, though that will not work for HostedEngine.
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
6 years, 4 months
vGPU with NVIDIA M60 mdev_type not showing
by Josep Manel Andrés Moscardó
Hi all,
I have a host with 2 M60 with the latest supported driver installed, and
working as you can see:
[root@esxh-03 vdsm]# lsmod | grep vfio
nvidia_vgpu_vfio       49475  0
nvidia              16633974  1 nvidia_vgpu_vfio
vfio_mdev              12841  0
mdev                   20336  2 vfio_mdev,nvidia_vgpu_vfio
vfio_iommu_type1       22300  0
vfio                   32656  3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1
[root@esxh-03 vdsm]# nvidia-smi
Mon Jan 14 17:39:30 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.91                Driver Version: 410.91   CUDA Version: N/A |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M60           Off  | 00000000:05:00.0 Off |                  Off |
| 16%   27C    P0    41W / 120W |     14MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla M60           Off  | 00000000:06:00.0 Off |                  Off |
| 17%   24C    P0    39W / 120W |     14MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla M60           Off  | 00000000:84:00.0 Off |                  Off |
| 15%   28C    P0    41W / 120W |     14MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla M60           Off  | 00000000:85:00.0 Off |                  Off |
| 16%   25C    P0    40W / 120W |     14MiB /  8191MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
But the issue is that when I do:
# vdsm-client Host hostdevListByCaps
I don't see any "mdev" device. Also, the directory /sys/class/mdev_bus does not exist.
Am I missing something?
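For reference, a quick sysfs check that would show whether the driver registered any mdev types with the kernel (the PCI address below is just one of the GPUs from the nvidia-smi output above):

# if this directory is missing, no mdev types were registered for the device,
# so there is nothing for hostdevListByCaps to report
ls /sys/bus/pci/devices/0000:05:00.0/mdev_supported_types
ls /sys/class/mdev_bus 2>/dev/null || echo "mdev_bus class not present"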
Cheers.
6 years, 4 months
[ANN] oVirt 4.3.0 Second Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second Release Candidate of oVirt 4.3.0, as of January 16th, 2019.
This is pre-release software. This pre-release should not be used in production.
Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This update is the second release candidate of the 4.3.0 version.
This release brings more than 130 enhancements and more than 440 bug fixes
on top of oVirt 4.2 series.
What's new in oVirt 4.3.0?
* Q35 chipset, support booting using UEFI and Secure Boot
* Skylake-server and AMD EPYC support
* New smbus driver in windows guest tools
* Improved support for v2v
* OVA export / import of Templates
* Full support for live migration of High Performance VMs
* Microsoft Failover clustering support (SCSI Persistent Reservation) for
Direct LUN disks
* Hundreds of bug fixes on top of oVirt 4.2 series
* New VM portal details page (see a preview here:
https://imgur.com/a/ExINpci)
* New Cluster upgrade UI
* OVN security groups
* IPv6 (static host addresses)
* Support of Neutron from RDO OpenStack 13 as external network provider
* Support of using Skydive from RDO OpenStack 14 as Tech Preview
* Support for 3.6 and 4.0 data centers, clusters and hosts were removed
* Now using PostgreSQL 10
* New metrics support using rsyslog instead of fluentd
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available for both CentOS 7 and Fedora 28
(tech preview).
- oVirt Node NG is already available for both CentOS 7 and Fedora 28 (tech
preview).
Additional Resources:
* Read more about the oVirt 4.3.0 release highlights:
http://www.ovirt.org/release/4.3.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.3.0/
[4] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
6 years, 4 months
Disk full
by suporte@logicworks.pt
Hi,
I have a all in one intallation with 2 glusters volumes.
The disk of one VM filled up the brick, which is a partition. That partition has 0% free disk space.
I moved the disk of that VM to the other gluster volume, the VM is working with the disk on the other gluster volume.
When I move the disk, it didn't delete it from the brick, the engine keeps complaining that there is no more disk space on that volume.
What can I do?
Is there a way to prevent this in the future?
Many thanks
José
--
Jose Ferradeira
http://www.logicworks.pt
6 years, 4 months
[Call for Testing] oVirt 4.3.0
by Sandro Bonazzola
Hi,
we are planning to release 4.3.0 RC2 tomorrow morning, January 16th 2019.
We have a scheduled final release for oVirt 4.3.0 on January 29th: this is the time when testing is most effective to ensure the release will be as stable as possible. Please join us in testing the RC2 release this week and report issues to
https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
Please remember this is still pre-release material, we recommend not
installing it on production environments yet.
Thanks,
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
6 years, 4 months
Ansible and SHE deployment
by Vrgotic, Marko
Dear oVirt team,
I would like to ask for your help in getting some general guidelines and do's & don'ts for deploying a complete oVirt environment using Ansible.
The first production deployment I made was done manually:
12 hypervisors – all the exact same HW brand and specs
3/12 used as the HA env for the SHE
oVirt version 4.2.1 (now we are at 4.2.7)
4 Gluster nodes, managed externally to oVirt
This is the environment I would like to convert into one deployable by Ansible.
At the moment I am working on a second production env, for the Eng/Dev department, and I want to go all the way with Ansible.
I am aware of your playbooks and their location on GitHub, but what I want to ask is advice on how to approach using them:
The second Env will have:
7 hypervisors, different specs, all provisioned using Foreman
oVirt version: the latest 4.2.x at that point
3/7 providing HA for the SHE engine
Storage will be NetApp.
Please let me know how to proceed with modifying the Ansible playbooks, what the recommended execution order should be, and what to look out for. Also, if you need additional info, I will be happy to provide it.
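For what it's worth, a minimal sketch of how the published role is typically wired into a playbook; the inventory group and variables file below are placeholders (the real variable names are documented in the oVirt.hosted-engine-setup README):

# site.yml -- a sketch, not a verified deployment playbook
- hosts: she_bootstrap_host          # the first of the 3 HA hosts
  become: true
  vars_files:
    - vars/hosted_engine.yml         # placeholder: fill in per the role README
  roles:
    - oVirt.hosted-engine-setup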
Kind regards,
Marko Vrgotic
6 years, 4 months
Import existing storage domain into new server
by michael@wanderingmad.com
I had an original ovirt-engine that had stuck tasks and storage issues. I decided to load a new engine appliance and import the existing servers and VMs into the new engine. This actually worked perfectly until I went to import the storage domain into the new engine, which failed. I was unable to move the domain to maintenance mode or do anything else from the old engine. Is there any way to force the import?
Error messages / logs:
Failed to attach Storage Domains to Data Center Default. (User: admin@internal-authz) (in GUI log)
engine.log:
2019-01-13 16:31:27,973-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [176267b0] START, GlusterVolumesListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, GlusterVolumesListVDSParameters:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}), log id: 2e7162ed
2019-01-13 16:31:28,204-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,205-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick '10.100.50.14:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,206-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick '10.100.50.12:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,208-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,209-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick '10.100.50.14:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,221-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick '10.100.50.12:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,232-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [176267b0] FINISH, GlusterVolumesListVDSCommand, return: {7add3797-9b03-433c-aa1a-90ad7c5ac280=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4205765f, fe0c3620-6a33-48fa-98b7-847ad76edaba=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@40fccb}, log id: 2e7162ed
2019-01-13 16:31:37,612-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] (DefaultQuartzScheduler10) [2876ae97] START, GlusterTasksListVDSCommand(HostName = daedalus.redforest.wanderingmad.com, VdsIdVDSCommandParametersBase:{hostId='b79fe49c-761f-4710-adf4-8fa6d1143dae'}), log id: 30c3bd7c
2019-01-13 16:31:37,863-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] (DefaultQuartzScheduler10) [2876ae97] FINISH, GlusterTasksListVDSCommand, return: [], log id: 30c3bd7c
2019-01-13 16:31:43,248-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler7) [70a256c0] START, GlusterServersListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, VdsIdVDSCommandParametersBase:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}), log id: 7f890a8
2019-01-13 16:31:43,705-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler7) [70a256c0] FINISH, GlusterServersListVDSCommand, return: [10.100.50.12/24:CONNECTED, icarus.redforest.wanderingmad.com:CONNECTED, daedalus.redforest.wanderingmad.com:CONNECTED], log id: 7f890a8
2019-01-13 16:31:43,708-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler7) [70a256c0] START, GlusterVolumesListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, GlusterVolumesListVDSParameters:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}), log id: 50f59aca
2019-01-13 16:31:43,943-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,944-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick '10.100.50.14:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,945-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick '10.100.50.12:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,947-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,948-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick '10.100.50.14:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,949-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick '10.100.50.12:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,949-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler7) [70a256c0] FINISH, GlusterVolumesListVDSCommand, return: {7add3797-9b03-433c-aa1a-90ad7c5ac280=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4205765f, fe0c3620-6a33-48fa-98b7-847ad76edaba=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@40fccb}, log id: 50f59aca
2019-01-13 16:31:58,987-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler4) [59b635f4] START, GlusterServersListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, VdsIdVDSCommandParametersBase:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}), log id: 512ef5c0
2019-01-13 16:31:59,518-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler4) [59b635f4] FINISH, GlusterServersListVDSCommand, return: [10.100.50.12/24:CONNECTED, icarus.redforest.wanderingmad.com:CONNECTED, daedalus.redforest.wanderingmad.com:CONNECTED], log id: 512ef5c0
2019-01-13 16:31:59,521-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler4) [59b635f4] START, GlusterVolumesListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, GlusterVolumesListVDSParameters:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}), log id: 2de38416
2019-01-13 16:31:59,773-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,774-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick '10.100.50.14:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,775-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick '10.100.50.12:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,776-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,777-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick '10.100.50.14:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,778-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick '10.100.50.12:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,778-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler4) [59b635f4] FINISH, GlusterVolumesListVDSCommand, return: {7add3797-9b03-433c-aa1a-90ad7c5ac280=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4205765f, fe0c3620-6a33-48fa-98b7-847ad76edaba=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@40fccb}, log id: 2de38416
and of course, none of the gluster hosts is set as the SPM
6 years, 4 months
unable to create templates or upload files
by michael@wanderingmad.com
oVirt stopped accepting new deployments or ISO uploads; they just stay as locked disks. I checked the ovirt-engine log and see constant errors like the one below:
ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-2) [] OAuthException access_denied: Cannot authenticate user 'admin@N/A': No valid profile found in credentials..
I checked the OVN provider in the admin portal and its username is correctly set to "admin@internal".
6 years, 4 months
External private ova / image repository
by Leo David
Hello Everyone,
I am not sure which pieces would be needed to have an external repo
that I can manage and use at the client site for downloading customized
templates.
i.e. similar to how an external Docker repo works.
Any ideas on this?
Thank you !
Have a nice day,
Leo
6 years, 4 months
oVirt Node install - kickstart postinstall
by jeanbaptiste@nfrance.com
Hello everybody,
For days, I've been trying to install oVirt (via Foreman) in network mode (TFTP net install).
All is great, but I want to perform some actions in postinstall (%post).
Some actions are related to /etc/sysconfig/network-interfaces and another action is related to root's authorized_keys.
When I try to add a pub-key to a newly created authorized_keys for root, it works (verified in anaconda).
But after installation and the anaconda reboot, I've noticed all my %post actions in / (root) are discarded. After reboot, there is nothing in /root/.ssh, for example.
Whereas, in /etc, all my modifications are preserved.
I thought it was an SELinux-related issue, but it is not related to SELinux.
I'm missing something. Can you please help me understand how the oVirt Node install / partitioning works?
thanks for all
6 years, 4 months
Ovirt 4.3 / New Install / NFS (broken)
by Devin Acosta
I installed the latest 4.3 release candidate and tried to add an NFS mount
to the Data Center, and it errors in the GUI with "Error while executing
action New NFS Storage Domain: Invalid parameter"; in the vdsm.log I
see it is passing "block_size=None". It does this regardless of whether NFS v3 or v4 is used.
InvalidParameterException: Invalid parameter: 'block_size=None'
2019-01-12 20:37:58,241-0700 INFO (jsonrpc/7) [vdsm.api] START
createStorageDomain(storageType=1,
sdUUID=u'b30c64c4-4b1f-4ebf-828b-e54c330ae84c', domainName=u'nfsdata',
typeSpecificArg=u'192.168.19.155:/data/data', domClass=1, domVersion=u'4',
block_size=None, max_hosts=2000, options=None)
from=::ffff:192.168.19.178,51042, flow_id=67743df7,
task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:48)
2019-01-12 20:37:58,241-0700 INFO (jsonrpc/7) [vdsm.api] FINISH
createStorageDomain error=Invalid parameter: 'block_size=None'
from=::ffff:192.168.19.178,51042, flow_id=67743df7,
task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:52)
2019-01-12 20:37:58,241-0700 ERROR (jsonrpc/7) [storage.TaskManager.Task]
(Task='ad82f581-9638-48f1-bcd9-669b9809b34a') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
File "<string>", line 2, in createStorageDomain
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2583,
in createStorageDomain
alignment = clusterlock.alignment(block_size, max_hosts)
File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
661, in alignment
raise se.InvalidParameterException('block_size', block_size)
InvalidParameterException: Invalid parameter: 'block_size=None'
2019-01-12 20:37:58,242-0700 INFO (jsonrpc/7) [storage.TaskManager.Task]
(Task='ad82f581-9638-48f1-bcd9-669b9809b34a') aborting: Task is aborted:
u"Invalid parameter: 'block_size=None'" - code 100 (task:1181)
2019-01-12 20:37:58,242-0700 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH
createStorageDomain error=Invalid parameter: 'block_size=None'
(dispatcher:81)
2019-01-12 20:37:58,242-0700 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
call StorageDomain.create failed (error 1000) in 0.00 seconds (__init__:312)
2019-01-12 20:37:58,541-0700 INFO (jsonrpc/1) [vdsm.api] START
disconnectStorageServer(domType=1,
spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'tpgt': u'1',
u'id': u'db7d16c8-7497-42db-8a75-81cb7f9d3350', u'connection':
u'192.168.19.155:/data/data', u'iqn': u'', u'user': u'', u'ipv6_enabled':
u'false', u'protocol_version': u'auto', u'password': '********', u'port':
u''}], options=None) from=::ffff:192.168.19.178,51042,
flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c,
task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:48)
2019-01-12 20:37:58,542-0700 INFO (jsonrpc/1) [storage.Mount] unmounting
/rhev/data-center/mnt/192.168.19.155:_data_data (mount:212)
2019-01-12 20:37:59,087-0700 INFO (jsonrpc/1) [vdsm.api] FINISH
disconnectStorageServer return={'statuslist': [{'status': 0, 'id':
u'db7d16c8-7497-42db-8a75-81cb7f9d3350'}]}
from=::ffff:192.168.19.178,51042,
flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c,
task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:54)
2019-01-12 20:37:59,089-0700 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call StoragePool.disconnectStorageServer succeeded in 0.55 seconds
(__init__:312)
Devin Acosta
6 years, 4 months
Trunk VLAN Ovirt 4.2
by Sebastian Antunez N.
Hello Guys
I have a lab with 6 oVirt 4.2 nodes. All nodes have 4 x 1GB NICs; two NICs
are for management, and I will assign two NICs in a bond for VM traffic.
My problem is the following.
The Cisco switch has 2 VLANs (56 and 57) and I need to be able to create
new networks for the two VLANs, but I am not clear on how to make the
VLANs go through these created networks. I have read that I must create a
network with VLAN 56, for example, and assign it an IP, then create the
network with VLAN 57 and assign it an IP, and later assign them to the bond.
Is what I describe correct, or should another process be used?
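For reference, the two VLAN networks can also be created from a script rather than through the Administration Portal. Below is a rough sketch using the Python SDK (ovirtsdk4); the engine URL, credentials, data center and cluster names are placeholder assumptions, not values from this lab:

# Rough sketch, not verified against this setup: creates two VLAN-tagged VM
# networks and makes them available in a cluster, so they can then be
# assigned to the bond in Setup Host Networks. All names are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
system_service = connection.system_service()
networks_service = system_service.networks_service()
clusters_service = system_service.clusters_service()
cluster = clusters_service.list(search='name=Default')[0]

for vlan_id in (56, 57):
    # Create the logical network with its VLAN tag in the data center.
    network = networks_service.add(
        types.Network(
            name='vlan%d' % vlan_id,
            data_center=types.DataCenter(name='Default'),
            vlan=types.Vlan(id=vlan_id),
            usages=[types.NetworkUsage.VM],
        )
    )
    # Make the network available (non-required) in the cluster.
    cluster_nets = clusters_service.cluster_service(cluster.id).networks_service()
    cluster_nets.add(types.Network(id=network.id, required=False))

connection.close()

As far as I know, a network carrying only VM traffic does not need an IP address on the host; it just has to be attached to the bond.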
Thanks for the help.
Sebastian
6 years, 4 months
Re: [ovirt-devel] Hard disk requirement
by Nir Soffer
On Sun, Jan 13, 2019 at 6:18 PM Hetz Ben Hamo <hetz(a)hetz.biz> wrote:
> Hi,
>
> The old oVirt (3.x?) ISO image wasn't requiring a big hard disk in order
> to install the node part on a physical machine, since the HE and other
> parts were running using NFS/iSCSI etc.
>
> oVirt 4.X if I understand correctly - does require hard disk on the node.
>
I think you can set up temporary storage on a diskless host using NFS or
by connecting to a temporary LUN and setting up a file system on it.
Once the bootstrap engine is ready on the "local" storage, we move the
engine disk to shared storage, and you can remove the local storage.
Adding Simone to add more info.
> Can this requirement be avoided and just use an SD Card? Whats the minimum
> storage for the local node?
>
Another option is to use /dev/shm, if you have a server with a lot of memory.
Note that this list is for oVirt developers. This question is more about
using oVirt, so the users mailing list is better. Other users may have
already solved this issue and can help more than developers, who have much
less experience with actual deployment.
Nir
6 years, 4 months
hosted-engine deploy Restore from backup failure
by Callum Smith
Restoring from backup appears to work perfectly, until it decides it hasn't worked and bails out. I can't see any relevant error here; does anyone have any insight?
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum(a)well.ox.ac.uk<mailto:callum@well.ox.ac.uk>
6 years, 4 months
Ovirt VDI Solution network issue
by mdcarr@meditech.com
Hello, first-time poster looking for help with setting up the network for a VDI deployment. A little background: I have oVirt Engine running on a VMware virtual machine and have added a physical host to serve up virtual machines running Windows 7 to our developers. The host has a static IP address, and when I create a virtual machine I can see the default network is attached and up but does not receive an IP address. It does have a MAC, so I'm wondering if I need to have our network team assign an IP for that MAC? I will be creating around 20 VMs that would get wiped after use, so assigning static IPs might cause issues. Am I missing something, or is there a better way to set up the LAN for this host? Thanks for any help!
6 years, 4 months
Hypervisor host CPU utilization metrics
by Sigbjorn Lie-Soland
Hi,
Linux sees twice the number of CPUs for a host with SMT/hyperthreading enabled. Inspecting the individual cores presented to the operating system using mpstat shows that the guests are running primarily on the “physical cores”, which is expected as the system is SMT aware. The “hyperthreaded cores” show guest activity when the host CPU is heavily loaded. The traditional tools such as top and vmstat do not take hyperthreading into consideration when calculating the total cpu% usage, again, as expected.
How are the hypervisor host CPU utilization metrics in oVirt calculated when a hypervisor host has SMT/hyperthreading enabled? Does oVirt calculate them in the same manner as the traditional vmstat/top/etc. Linux tools, or does it only count CPU usage on the “physical cores” of the hypervisor? Does it read CPU metrics from the Intel PCM (https://github.com/opcm/pcm <https://github.com/opcm/pcm>)?
Does the “Count Threads as Cores” setting affect how the host hypervisor CPU utilization metrics are calculated?
The reason for asking is when using tools for capacity and utilization reporting to attempt to establish cluster growth and to anticipate when and how much hypervisor capacity to add, we need the input utilization data to be a correct representation of what is happening at the hypervisor host.
Thanks.
Regards,
Siggi
6 years, 4 months
exporting VMs using vdsm-client
by Callum Smith
Is it possible to export VMs using vdsm-client, so that in the case of a catastrophic failure they can be imported into a new oVirt engine setup?
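As far as I know, vdsm-client has no verb for exporting a whole VM; export and import are orchestrated by the engine. Two common precautions while an engine is still reachable are exporting VMs as OVA files, or later re-importing the intact data storage domain into the new engine and registering the VMs from it. A rough sketch of the OVA export via the Python SDK (ovirtsdk4), with the engine URL, VM name, host name and target path as placeholder assumptions:

# Rough sketch, not verified: asks the engine to write an OVA of one VM to a
# directory on a chosen host, as a precaution against losing the engine.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
system_service = connection.system_service()

vm = system_service.vms_service().list(search='name=myvm')[0]
host = system_service.hosts_service().list(search='name=myhost')[0]

vm_service = system_service.vms_service().vm_service(vm.id)
vm_service.export_to_path_on_host(
    host=types.Host(id=host.id),   # host that will write the file
    directory='/var/tmp',          # directory on that host
    filename='myvm.ova',
)

connection.close()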
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum(a)well.ox.ac.uk<mailto:callum@well.ox.ac.uk>
6 years, 4 months
[ANN] oVirt 4.3.0 First Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the First
Release Candidate of oVirt 4.3.0, as of January 10th, 2019.
This is pre-release software. This pre-release should not be used in
production.
Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This update is the first release candidate of the 4.3.0 version.
This release brings more than 120 enhancements and more than 420 bug fixes
on top of oVirt 4.2 series.
What's new in oVirt 4.3.0?
* Q35 chipset, support booting using UEFI and Secure Boot
* Skylake-server and AMD EPYC support
* New smbus driver in windows guest tools
* Improved support for v2v
* Hundreds of bug fixes on top of oVirt 4.2 series
* New VM portal details page (see a preview here:
https://imgur.com/a/ExINpci)
* New Cluster upgrade UI
* OVN security groups
* IPv6 (static host addresses)
* LLDP Labeler
* Support of Neutron from RH OSP 13 as external network provider
* Support of using Skydive from RH OSP 14 as Tech Preview
* Support for 3.6 and 4.0 data centers, clusters and hosts were removed
* Now using PostgreSQL 10
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available for both CentOS 7 and Fedora 28
(tech preview).
- oVirt Node NG is already available for both CentOS 7 and Fedora 28 (tech
preview).
Additional Resources:
* Read more about the oVirt 4.3.0 release highlights:
http://www.ovirt.org/release/4.3.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.3.0/
[4] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
6 years, 4 months
quota_name field length mismatch in db schema
by Pavel Šipoš
Hi!
We have a problem with quotas whose name is longer than 60
characters.
Such quotas can be created (or renamed) through ovirt-shell but cannot
be managed through the oVirt Engine interface. E.g. VMs under that quota
can't be started or created, and the quota comment can't be added...
It turns out that the problem is with the engine database definition of
the "audit_log" table, which declares the "quota_name" column as "character
varying(60)".
However, "quota.quota_name" is defined as "character varying(65)".
I think this is a bug and should be fixed in the future.
Error example:
PL/pgSQL function insertauditlog(integer,timestamp with time
zone,integer,character varying,integer,text,uuid,character
varying,uuid,character varying,uuid,character varying,uuid,character
varying,uuid,character varying,uuid,character varying,uuid,character
varying,uuid,character varying,character varying,uuid,uuid,character
varying,text,boolean,uuid,text) line 9 at SQL statement; nested
exception is org.postgresql.util.PSQLException: ERROR: value too long
for type character varying(60)
Regards,
Pavel
6 years, 4 months
Datacenter Down Storage error
by ahuser@web.de
Hi @all,
I have a big problem with my datacenter.
After applying the optimization for VM storage on an active gluster volume, the VMs were paused. Now I can't activate the storage in the data center. I have tried rebooting all servers.
It runs the latest stable oVirt version:
ovirt-engine-4.2.7.5-1.el7.noarch
vdsm-gluster-4.20.43-1.el7.x86_64
glusterfs-3.12.15-1.el7.x86_64
All servers up to date, CentOS 7.6.
Gluster Volume Replica 3 with Arbiter
/var/log/sanlock.log
set scheduler RR|RESET_ON_FORK priority 99 failed: Operation not permitted
VDSM ovnode3 command SpmStatusVDS failed: (22, 'Sanlock resource read failure', 'Invalid argument')
can anybody help me?
thx
6 years, 4 months
JSON internal error setting bonding using vdsm-client
by Callum Smith
Dear All,
I'm getting the error: "(code=32603, message=Internal JSON-RPC error: {'reason': "'unicode' object has no attribute 'sort'"})"
when trying to apply networking configuration with vdsm-client -f bond.json Host setupNetworks
bond.json contains:
{
    "networks": {},
    "bondings": {
        "bond0": {
            "nics": "eno1+eno2",
            "options": "mode=4"
        }
    },
    "options": {}
}
Of course Python is handling all the arguments as unicode entities rather than plain strings. Any idea what might be wrong?
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum(a)well.ox.ac.uk<mailto:callum@well.ox.ac.uk>
6 years, 4 months
File level restore of a VM backup located in export domain in Overt
by kevin.doyle@manchester.ac.uk
Hi
I would like to create backups of VMs using cron. I have installed and run https://github.com/wefixit-AT/oVirtBackup. This works well and saves an image to the export domain I created. I can also carry out a full restore from this by importing from the export domain and selecting the backup. The question I have is how to do a single-file recovery from the backup image. I would like to restore /etc/hosts, just as an example. What command would I use?
I am not sure of the layout of the export domain directory
── dom_md
│ ├── ids
│ ├── inbox
│ ├── leases
│ ├── metadata
│ └── outbox
├── images
│ └── 622eb98e-f10b-4e96-bec5-8f0a7e7745fe
│ ├── b47ce8e1-f552-47f8-af56-33a3b8ce7aed
│ └── b47ce8e1-f552-47f8-af56-33a3b8ce7aed.meta
└── master
├── tasks
└── vms
└── c6e6c49c-c9bc-4683-8b19-d18250b5697b
└── c6e6c49c-c9bc-4683-8b19-d18250b5697b.ovf
The largest file is found in the images folder, so I assume this is the backup image?
./images/622eb98e-f10b-4e96-bec5-8f0a7e7745fe:
total 2.7G
drwxr-xr-x. 2 vdsm kvm 99 Jan 10 11:48 .
drwxr-xr-x. 3 vdsm kvm 50 Jan 10 11:48 ..
-rw-rw----. 1 vdsm kvm 50G Jan 10 11:51 b47ce8e1-f552-47f8-af56-33a3b8ce7aed
-rw-r--r--. 1 vdsm kvm 269 Jan 10 11:48 b47ce8e1-f552-47f8-af56-33a3b8ce7aed.meta
# file b47ce8e1-f552-47f8-af56-33a3b8ce7aed
b47ce8e1-f552-47f8-af56-33a3b8ce7aed: x86 boot sector; partition 1: ID=0x83, active, starthead 32, startsector 2048, 2097152 sectors; partition 2: ID=0x8e, starthead 170, startsector 2099200, 102758400 sectors, code offset 0x63
Any help would be appreciated, I am new to Ovirt
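One way to pull a single file out of the exported image without importing the whole VM is libguestfs. Below is a rough sketch using its Python bindings; the image path is assembled from the listing above and should be treated as an assumption about your layout, and python-libguestfs needs to be installed:

# Rough sketch: opens the exported disk image read-only, mounts the guest
# filesystems and downloads one file (/etc/hosts) out of it.
import guestfs

image = ('/path/to/export-domain/images/'
         '622eb98e-f10b-4e96-bec5-8f0a7e7745fe/'
         'b47ce8e1-f552-47f8-af56-33a3b8ce7aed')

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts(image, readonly=1)   # never write to the backup image
g.launch()

# Let libguestfs detect the installed OS and its mount points,
# then mount them read-only, shortest mount point ("/") first.
root = g.inspect_os()[0]
mountpoints = g.inspect_get_mountpoints(root)
for mountpoint in sorted(mountpoints, key=len):
    g.mount_ro(mountpoints[mountpoint], mountpoint)

# Copy /etc/hosts from the guest image into the current directory.
g.download('/etc/hosts', 'hosts.restored')

g.umount_all()
g.shutdown()
g.close()

The same thing can be done interactively with the guestfish tool from the same package.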
regards
Kevin
6 years, 4 months
ovirt 4.2.7-1 - adding virtual host ( nested virt. )
by paf1@email.cz
Hello guys,
I've got a problem with adding a new host (ESX virtual) to oVirt 4.2.7-1
(gluster included).
Is this feature supported?
2019-01-07 19:38:30,168+01 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
(DefaultQuartzScheduler1) [15a4029b] Error while refreshing server data
for cluster 'MID' from database: null
regs.
Paul
6 years, 4 months
VMs Hung On Booting From Hard Disk
by Douglas Duckworth
Hi
I have deployed 10 VMs in our oVirt cluster using Ansible. Thanks everyone for helping get it working.
However, I randomly run into issues where the OS won't load after completing kickstart installation.
As you can see the disks are attached while we are booting to it however I keep getting a hang at "booting from hard disk." The kickstart, kernel, and everything else for this VM, matches others. I am unsure why this problem occurs.
I have attached repeating vdsm errors from the host which has the VM.
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu<mailto:doug@med.cornell.edu>
O: 212-746-6305
F: 212-746-8690
6 years, 4 months
Re: move 'ovirtmgmt' bridge to a bonded NIC team
by Dominik Holler
On Tue, 8 Jan 2019 17:01:38 +0000
Shawn Southern <shawn.southern(a)entegrus.com> wrote:
>We've recently added additional NICs to our oVirt nodes, and want to move the ovirtmgmt interface to one of the bonded interfaces, away from the single ethernet port currently used. This is to provide redundant connectivity to the nodes.
>
>I've not had any luck finding documentation on how to do this. If we change it manually by editing files in /etc/sysconfig/network-scripts, VDSM simply changes everything back.
>
Please use oVirt Engine to manage the network configuration.
>I'm just looking to be pointed in the right direction here.
>
I see no reason why the usual way of configuring host networking via
Compute > Hosts > hostname > Network Interfaces > Setup Host Networks
should not work.
ovirtmgmt must not be used by a VM on this host during the change,
and ovirtmgmt should use a static IP address in
Setup Host Networks > Edit management network: ovirtmgmt > IPv4.
It might be a good idea to move the host to maintenance before the change,
and ensure connectivity from the bond to oVirt Engine, because the change will be rolled back,
if connectivity is lost.
>Thanks!
6 years, 4 months
oVirt upgrade 4.1 to 4.2
by p.staniforth@leedsbeckett.ac.uk
Hello, when we did a live upgrade of our VDC from 4.1 to 4.2, we had a large number of running VMs that had a Custom Compatibility Version set to 4.1 to allow them to keep running while the cluster and VDC were upgraded.
Unfortunately, a large number of snapshots were taken by the users before they restarted their VMs, so these still have the Custom_Compatibility_Version set to 4.1 and can't run in the 4.2 VDC. Is there a way to search for them in the API or SDK? I can only find them in the events log when they fail to start.
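One way to list them without grepping the events log is to query the VMs through the Python SDK and filter on the custom compatibility field. A rough sketch (engine URL and credentials are placeholders; snapshot/next-run configurations may still need a separate check):

# Rough sketch: prints every VM whose custom compatibility version is still
# pinned to 4.1.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
for vm in vms_service.list():
    ccv = vm.custom_compatibility_version
    if ccv is not None and (ccv.major, ccv.minor) == (4, 1):
        print(vm.name)

connection.close()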
Thanks,
Paul S.
6 years, 4 months
[Cannot edit VM. Maximum number of sockets exceeded.]
by Matthias Leopold
Hi,
when a user is managing a "higher number" (couldn't find the exact
number yet, roughly >10) of VMs in VM Portal and wants to edit a VM he
gets a "[Cannot edit VM. Maximum number of sockets exceeded.]" error
message in the browser, which I also see in engine.log. I couldn't find
the reason for this. I'm using squid as a SPICE Proxy at cluster level.
oVirt version is 4.2.7, can anybody help me?
thx
matthias
6 years, 4 months
ISCSI Domain & LVM
by tehnic@take3.ro
Hello all,
i have a question regarding this note in ovirt storage documentation:
**Important:** If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption.
What does "you must create a filter to hide the guest logical volumes" exactly mean?
I assume i have to set some filter in lvm.conf on all ovirt hosts, but im not sure about what to filter?
What i already saw is that after creating the ISCSI domain and cloning/moving/creating virtual machines on it there are new PVs and LVs visible on the ovirt hosts having an object UUID in the name (output from "pvs" and "lvs" commands).
Is this expected behavior or do i have to filter exactly this ones by allowing only local disks to be scanned for PVs/LVs?
Or do i have to setup filter to allow only local disks + ISCSI disks (in my case /dev/sd?) to be scanned for PVs/LVs?
I noticed too that after detaching and removing the ISCSI domain i still have UUID PVs. They all show up with "input/output error" in the output of "pvs" and stay there until i reboot the ovirt hosts.
On my ISCSI target system i already set the correct lvm filters so that "targetcli" is happy after reboot.
Thank you!
Happy new year,
Robert
6 years, 4 months
Shutdown VMs when Nobreak is on Battery
by Vinícius Ferrão
Hello,
I would like to know if oVirt supports shutting down VMs when a battery threshold on the UPS (nobreak) device is reached.
There are some fencing agents for APC devices, so I’m hoping this is supported.
If not, how are you guys doing this kind of thing? A separate device or VM in the datacenter to issue the shutdown commands?
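One approach is a small script on whatever box runs the UPS monitoring daemon (for example an apcupsd or NUT event hook) that asks the engine to shut everything down once the battery threshold is reached. A rough sketch using the Python SDK (ovirtsdk4), with the engine URL and credentials as placeholders:

# Rough sketch: meant to be called from a UPS monitoring hook when the
# battery threshold is reached; asks the engine to shut down every running VM.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
for vm in vms_service.list(search='status=up'):
    print('Shutting down %s' % vm.name)
    # shutdown() sends an ACPI request to the guest; stop() would force it off.
    vms_service.vm_service(vm.id).shutdown()

connection.close()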
Thanks,
6 years, 4 months