Windows 10 & 2016 snapshot error
by femi adegoke
oVirt Guest Tools 4.2-1.el7.centos & QEMU guest agent v7.5 are installed.
When I try to take a snapshot, I get this warning:
"The VM will be paused while saving the memory
Could not detect Guest Agent on the VM.
Note that without a Guest Agent the data on the created snapshot may be inconsistent."
What am I missing or doing wrong?
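One hedged way to check from the host whether the QEMU guest agent channel is actually alive (<vm-name> is a placeholder; virsh on an oVirt node may prompt for the vdsm SASL credentials):
virsh qemu-agent-command <vm-name> '{"execute":"guest-ping"}'
# an empty {"return":{}} reply means the agent channel is up; a timeout usually
# means the agent service is not running inside the Windows guest
It is also worth confirming inside the guest that both the QEMU guest agent and the oVirt guest agent services installed by the Guest Tools are running (the exact service names vary slightly between Guest Tools versions).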
6 years, 3 months
AD authentication not working after upgrade to 4.2.5
by p.staniforth@leedsbeckett.ac.uk
Hello,
I have upgraded from 4.2.4 to 4.2.5, and users in our AD profile can no longer log in.
In the engine log I am getting:
ERROR [org.ovirt.engine.core.sso.servlets.InteractiveAuthServlet] (default task-15) [] Internal Server Error: Cannot resolve principal 'LEEDSBECKETT\stanif02'
ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-15) [] Cannot resolve principal 'LEEDSBECKETT\stanif02'
ERROR [org.ovirt.engine.core.aaa.servlet.SsoPostLoginServlet] (default task-15) [] server_error: Cannot resolve principal 'LEEDSBECKETT\stanif02'
ERROR [org.ovirt.engine.core.sso.servlets.InteractiveAuthServlet] (default task-15) [] Internal Server Error: Cannot resolve principal 'LEEDSBECKETT\stanif02'
ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-15) [] Cannot resolve principal 'LEEDSBECKETT\stanif02'
although I can test authentication successfully using "ovirt-engine-extensions-tool aaa login-user".
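For reference, a sketch of that check (the profile and user names below are placeholders taken from the log lines above):
ovirt-engine-extensions-tool aaa login-user --profile=LEEDSBECKETT --user-name=stanif02
# if this succeeds while portal login fails, the engine's principal resolution
# (the authz extension) is the likely suspect rather than the LDAP bind itself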
Thanks,
Paul S.
6 years, 3 months
Drive letter / scsi id not consistent
by Nathan March
Hi,
Managing disks in the oVirt GUI seems error prone, since the GUI ordering does not appear to be used in any practical way. The SCSI IDs are not exposed via the GUI either, so if you have two drives of the same size on NFS there is no way to identify which is which without resorting to dumping the XML.
I can see in the XML that the target dev is correct and matches the GUI's drive order, but the SCSI unit seems to be generated based on the order in which the disks were last activated:
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
<source file='/rhev/data-center/mnt/10.1.32.10:_sas01/2060a19f-f26c-4dba-a559-83541a4d0c7a/images/b622cde9-ffc4-487b-8038-f7cb7fbc00a1/7240b4b2-5ab1-40ab-99bb-09c69fd835f6'/>
<backingStore/>
<target dev='sda' bus='scsi'/>
<serial>b622cde9-ffc4-487b-8038-f7cb7fbc00a1</serial>
<boot order='1'/>
<alias name='ua-b622cde9-ffc4-487b-8038-f7cb7fbc00a1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
<source file='/rhev/data-center/mnt/10.1.32.10:_sas01/2060a19f-f26c-4dba-a559-83541a4d0c7a/images/86765f60-4240-44f4-b437-f22b2cc3df1a/482a363e-461a-4f37-876c-43ceb529a93a'/>
<backingStore/>
<target dev='sdb' bus='scsi'/>
<serial>86765f60-4240-44f4-b437-f22b2cc3df1a</serial>
<alias name='ua-86765f60-4240-44f4-b437-f22b2cc3df1a'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
<source file='/rhev/data-center/mnt/10.1.32.10:_sas01/2060a19f-f26c-4dba-a559-83541a4d0c7a/images/0b020d24-3dea-4be7-931a-9c25ccfeec48/7e3957ff-a3bf-4080-a7b6-ddad7f63a299'/>
<backingStore/>
<target dev='sdc' bus='scsi'/>
<serial>0b020d24-3dea-4be7-931a-9c25ccfeec48</serial>
<alias name='scsi0-0-0-3'/>
<address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
After deactivating the other two disks and rebooting the machine, sda has now become unit 3:
<target dev='sda' bus='scsi'/>
<serial>b622cde9-ffc4-487b-8038-f7cb7fbc00a1</serial>
<boot order='1'/>
<alias name='ua-b622cde9-ffc4-487b-8038-f7cb7fbc00a1'/>
<address type='drive' controller='0' bus='0' target='0' unit='3'/>
I then shut down, activated the extra two drives, and booted the VM back up, but sda is still on unit 3:
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
<source file='/rhev/data-center/mnt/10.1.32.10:_sas01/2060a19f-f26c-4dba-a559-83541a4d0c7a/images/b622cde9-ffc4-487b-8038-f7cb7fbc00a1/7240b4b2-5ab1-40ab-99bb-09c69fd835f6'/>
<backingStore/>
<target dev='sda' bus='scsi'/>
<serial>b622cde9-ffc4-487b-8038-f7cb7fbc00a1</serial>
<boot order='1'/>
<alias name='ua-b622cde9-ffc4-487b-8038-f7cb7fbc00a1'/>
<address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
<source file='/rhev/data-center/mnt/10.1.32.10:_sas01/2060a19f-f26c-4dba-a559-83541a4d0c7a/images/86765f60-4240-44f4-b437-f22b2cc3df1a/482a363e-461a-4f37-876c-43ceb529a93a'/>
<backingStore/>
<target dev='sdb' bus='scsi'/>
<serial>86765f60-4240-44f4-b437-f22b2cc3df1a</serial>
<alias name='ua-86765f60-4240-44f4-b437-f22b2cc3df1a'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
<source file='/rhev/data-center/mnt/10.1.32.10:_sas01/2060a19f-f26c-4dba-a559-83541a4d0c7a/images/0b020d24-3dea-4be7-931a-9c25ccfeec48/7e3957ff-a3bf-4080-a7b6-ddad7f63a299'/>
<backingStore/>
<target dev='sdc' bus='scsi'/>
<serial>0b020d24-3dea-4be7-931a-9c25ccfeec48</serial>
<alias name='ua-0b020d24-3dea-4be7-931a-9c25ccfeec48'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
I managed to get things into the correct state by shutting down the VM, deactivating all the disks, and then activating them in the order I want them to show up. This sets the correct order: unit 0 for sda, unit 1 for sdb, unit 2 for sdc.
Is there some way to make oVirt handle this sanely and always expose the SCSI devices in the same order as the GUI (and the "target dev" field)?
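In the meantime, one hedged way (for Linux guests) to tell which guest device is which oVirt disk without dumping the domain XML is to match the disk serial, which oVirt sets to the disk ID (it matches the images/ path in the XML above), against the guest's udev links; the device name below is an example:
# inside the guest
ls -l /dev/disk/by-id/ | grep -i scsi
udevadm info --query=property --name=/dev/sda | grep ID_SERIAL
# compare the reported serial with the disk ID shown in the Admin Portal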
Cheers!
Nathan
6 years, 3 months
Live Migration via NFS
by Douglas Duckworth
Hi
I haven't used oVirt in several years, so I wanted to ask how live migration may have changed.
Can NFS facilitate live migration in a cluster of two hosts that both have the NFS share mounted?
The hosts would have locally attached storage, which would be the original location of the VMs.
6 years, 3 months
python error in syslog of all hosts in new HCI default build
by Jayme
I recently built a three-host HCI cluster with oVirt Node 4.2.5. I am frequently seeing the following error in each host's syslog. What does it mean, and how can it be corrected?
vdsm[3470]: ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 91, in vdoVolumeList
    return self._gluster.vdoVolumeList()
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 818, in vdoVolumeList
    status = self.svdsmProxy.glusterVdoVolumeList()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVdoVolumeList
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
AttributeError: 'str' object has no attribute 'iteritems'
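For what it's worth, the final AttributeError just means a string reached code that expected a dict; in Python 2.7 only dicts have .iteritems(). A throwaway reproduction from a shell (assuming python2 is present, as it is on oVirt Node 4.2):
python2 -c "'abc'.iteritems()"
# AttributeError: 'str' object has no attribute 'iteritems'
The traceback shows it being raised in vdsm's glusterVdoVolumeList() path, i.e. while vdsm gathers VDO volume information, so it appears to come from vdsm's VDO reporting rather than from the VMs themselves.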
6 years, 3 months
oVirt 4.2.6 + GlusterFS
by radu@radu.cat
Hello,
I have 24 HP DL360 G7 servers, all with 64 GB of RAM and 1 TB SAS disks.
What I want to achieve is to use all the nodes with oVirt (one will host the controller + ISO domain) and use GlusterFS on the same nodes for the storage domain.
I don't know if anyone has tried this before, but I want it to replace my current "standalone" production hypervisor setup.
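For what it's worth, a hyperconverged setup on those nodes would normally put the VM storage on a replica-3 Gluster volume; a minimal sketch, where node1..node3 and the brick paths are placeholders:
gluster volume create vmstore replica 3 \
    node1:/gluster_bricks/vmstore/brick \
    node2:/gluster_bricks/vmstore/brick \
    node3:/gluster_bricks/vmstore/brick
gluster volume start vmstore
# the started volume can then be added in the engine as a GlusterFS storage domain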
Thanks in advance
6 years, 3 months
Fwd: Upgraded to 4.2
by nreaction
I recently updated to 4.2, and when I restarted oVirt it started shutting down all my guests on the host servers. How can I configure oVirt not to do that?
6 years, 3 months
Failed to hot-plug disk
by Punaatua PK
Hello,
I have a problem when I try to hot-plug a disk to a VM. Here is the situation.
We use a VM (let's call it backupVM) that is responsible for backing up our VMs by:
- Making a snapshot of the VM we want to back up
- Attaching the snapshot disk
- Making the copy using dd
- Unplugging the snapshot disk and then deleting the snapshot
For some reason I don't understand, for certain VMs the backupVM cannot hot-plug the snapshot disk.
I'm using oVirt 4.2.4.
Here is what I see in engine.log:
2018-08-04 19:52:30,496-10 ERROR [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-61) [361eca85-ca85-4e23-b292-cd8303a1a86d] Command 'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command 'device_add': Failed to get shared "write" lock, code = 45 (Failed with error FailedToPlugDisk and code 45)
Here is the vdsm.log:
2018-08-04 19:51:38,512-1000 INFO (jsonrpc/5) [virt.vm] (vmId='78ef239f-3cb3-4eef-921f-f989724009ef') Hotplug disk xml: <?xml version='1.0' encoding='utf-8'?>
<disk address="" device="disk" snapshot="no" type="file">
<source file="/var/lib/vdsm/transient/225ec856-d661-4374-bb7d-8ea7168fe5f2-b641bb09-325a-4932-a9af-32c907e4a381.D52Nkj" />
<target bus="virtio" dev="vdg" />
<serial>c312a0c3-8c58-472f-99ad-cfbabb42337d</serial>
<driver cache="writethrough" error_policy="stop" io="threads" name="qemu" type="qcow2" />
</disk>
(vm:3859)
2018-08-04 19:51:38,672-1000 ERROR (jsonrpc/5) [virt.vm] (vmId='78ef239f-3cb3-4eef-921f-f989724009ef') Hotplug failed (vm:3867)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3865, in hotplugDisk
    self._dom.attachDevice(driveXml)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 570, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command 'device_add': Failed to get shared "write" lock
Here is the libvirtd log in /var/log/messages:
libvirtd: 2018-08-05 05:51:38.637+0000: 4967: error : qemuMonitorJSONCheckError:389 : internal error: unable to execute QEMU command 'device_add': Failed to get shared "write" lock
Do you have any idea what is going on? How can I check this lock?
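In case it helps, the 'Failed to get shared "write" lock' message comes from QEMU's image-file locking (added in QEMU 2.10) and typically means another running QEMU process (here, most likely the source VM) already holds a conflicting lock on a file in that image chain. A hedged way to look at it from the host, where <image-path> is a placeholder for the file from the hotplug XML above:
lsof <image-path>
# shows which qemu-kvm process has the file open
qemu-img info -U <image-path>
# -U/--force-share (if your qemu-img supports it) inspects the image
# without trying to take a lock yourself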
6 years, 3 months
VirtIO in new upgraded 4.2.5 for FreeBSD is very poor (could not use)
by Paul.LKW
Dear All:
I just upgraded from oVirt 4.2.4 to 4.2.5 and the nightmare began: FreeBSD VMs
hang as soon as the VM's VirtIO hard disk is under some load (e.g. extracting a
big tarball, running portsnap extract, etc.). I tried creating a VM using IDE
instead and it does not have the problem, but disk I/O is slow; with VirtIO it
hangs whenever it likes.
I also tried creating another VM with VirtIO-SCSI, and the hang does not seem
to occur, but the oVirt host load goes over 40 when the VM's hard disk does
even a small job. So my conclusion is: if you are running FreeBSD guests,
please do not upgrade at the moment.
In fact, I feel that every new version has problems and causes users
nightmares.
BR,
Paul.LKW
6 years, 3 months