Cannot start VM due to no active snapshot.

I've hit the issue described in:
https://access.redhat.com/solutions/3393181
https://bugzilla.redhat.com/show_bug.cgi?id=1561052

I have one VM with three disks that cannot start. The logs show a null pointer exception when the engine tries to locate the snapshot, and I've verified in the engine database that no "Active" snapshot exists for the VM (this is also visible in the WebUI).

The oVirt hosts and the engine are both (now) on RHEL 7.5. I'm not sure whether the snapshot failed to create before the hosts were updated to 7.5 - the VM in question is a server that never gets shut down; it just gets migrated around while we perform host maintenance.

Versions:
oVirt: v4.2.1.7-1
Host kernel: 3.10.0-862.9.1
KVM: 2.9.0-16
libvirt: libvirt-3.9.0-14.el7_5.6
VDSM: vdsm-4.20.27.1-1

The article just says to contact Red Hat support, but I don't have a RHEV support agreement for my oVirt cluster. Does anyone know how to recover this VM?
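For reference, the check I ran against the engine database was something along these lines (column names are what I believe the 4.2 snapshots table uses, so adjust if your schema differs); it returns no row with snapshot_type = 'ACTIVE' for this VM:

    select snapshot_id, snapshot_type, status, description
    from snapshots
    where vm_id = '<vm uuid>';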

I can't write an elaborate response since I am away from my laptop, but a workaround would be to simply insert the snapshot back into the snapshots table. You need to locate the snapshot's ID in the logs where the failure occurred, and use the VM's ID:

    insert into snapshots values ('<snapshot uuid>', '<vm uuid>', 'ACTIVE', 'OK', 'Active VM', '2018-02-21 14:00:11.845-04');
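Spelled out with explicit column names, the same insert would look roughly like this (this assumes the snapshot_id/vm_id/snapshot_type/status/description/creation_date layout of the 4.2-era snapshots table; check the actual definition with \d snapshots in psql before running anything):

    insert into snapshots
        (snapshot_id, vm_id, snapshot_type, status, description, creation_date)
    values
        ('<snapshot uuid>', '<vm uuid>', 'ACTIVE', 'OK', 'Active VM', now());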

Benny,

Thanks for the response!

I don't think I found the right snapshot ID in the logs, but I was able to track down an ID in the images table. I've inserted that ID as the active VM in the snapshots table and it now shows an Active VM with Status:Up disks in my snapshots view in the WebUI. Unfortunately, though, when I try to start the VM it still fails.

The new error being thrown is below:

2018-07-19 16:52:36,845-0400 INFO (vm/f0087d72) [vds] prepared volume path: /rhev/data-center/mnt/192.168.8.110:_oi_nfs_kvm-nfs-sr1/428a1232-ba20-4338-b24b-2983a112501c/images/4e897a16-3f7a-47dd-b047-88bb1b191406/2d01adb1-f629-4c10-9a2c-de92cf5d41bf (clientIF:497)
2018-07-19 16:52:36,846-0400 INFO (vm/f0087d72) [vdsm.api] START prepareImage(sdUUID=u'428a1232-ba20-4338-b24b-2983a112501c', spUUID=u'39f25b84-a2ad-439f-8db7-2dd7896186d1', imgUUID=u'8462a296-65cc-4740-a479-912164fa7e1d', leafUUID=u'4bb7829f-89c8-4132-9ec3-960e39094898', allowIllegal=False) from=internal, task_id=56401238-1ad9-44f2-ad54-c274c7349546 (api:46)
2018-07-19 16:52:36,890-0400 INFO (vm/f0087d72) [vdsm.api] FINISH prepareImage error=Cannot prepare illegal volume: (u'fb6e20c5-ddfa-436e-95bc-38c28e3671ec',) from=internal, task_id=56401238-1ad9-44f2-ad54-c274c7349546 (api:50)
2018-07-19 16:52:36,890-0400 ERROR (vm/f0087d72) [storage.TaskManager.Task] (Task='56401238-1ad9-44f2-ad54-c274c7349546') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3170, in prepareImage
    raise se.prepareIllegalVolumeError(volUUID)
prepareIllegalVolumeError: Cannot prepare illegal volume: (u'fb6e20c5-ddfa-436e-95bc-38c28e3671ec',)
2018-07-19 16:52:36,891-0400 INFO (vm/f0087d72) [storage.TaskManager.Task] (Task='56401238-1ad9-44f2-ad54-c274c7349546') aborting: Task is aborted: "Cannot prepare illegal volume: (u'fb6e20c5-ddfa-436e-95bc-38c28e3671ec',)" - code 227 (task:1181)
2018-07-19 16:52:36,892-0400 ERROR (vm/f0087d72) [storage.Dispatcher] FINISH prepareImage error=Cannot prepare illegal volume: (u'fb6e20c5-ddfa-436e-95bc-38c28e3671ec',) (dispatcher:82)
2018-07-19 16:52:36,892-0400 ERROR (vm/f0087d72) [virt.vm] (vmId='f0087d72-f051-4f62-b3fd-dd1a56a211ee') The vm start process failed (vm:943)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2777, in _run
    self._devices = self._make_devices()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2624, in _make_devices
    return self._make_devices_from_dict()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2644, in _make_devices_from_dict
    self._preparePathsForDrives(dev_spec_map[hwclass.DISK])
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1017, in _preparePathsForDrives
    drive['path'] = self.cif.prepareVolumePath(drive, self.id)
  File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 417, in prepareVolumePath
    raise vm.VolumeError(drive)
VolumeError: Bad volume specification {u'poolID': u'39f25b84-a2ad-439f-8db7-2dd7896186d1', 'index': '1', u'iface': u'virtio', 'apparentsize': '1441792', u'imageID': u'8462a296-65cc-4740-a479-912164fa7e1d', u'readonly': u'false', u'shared': u'false', 'truesize': '1444864', u'type': u'disk', u'domainID': u'428a1232-ba20-4338-b24b-2983a112501c', 'reqsize': '0', u'format': u'cow', u'deviceId': u'8462a296-65cc-4740-a479-912164fa7e1d', u'address': {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'slot': u'0x07'}, u'device': u'disk', u'propagateErrors': u'off', u'optional': u'false', 'vm_custom': {}, 'vmid': 'f0087d72-f051-4f62-b3fd-dd1a56a211ee', u'volumeID': u'4bb7829f-89c8-4132-9ec3-960e39094898', u'diskType': u'file', u'specParams': {}, u'discard': False}
2018-07-19 16:52:36,893-0400 INFO (vm/f0087d72) [virt.vm] (vmId='f0087d72-f051-4f62-b3fd-dd1a56a211ee') Changed state to Down: Bad volume specification {u'poolID': u'39f25b84-a2ad-439f-8db7-2dd7896186d1', 'index': '1', u'iface': u'virtio', 'apparentsize': '1441792', u'imageID': u'8462a296-65cc-4740-a479-912164fa7e1d', u'readonly': u'false', u'shared': u'false', 'truesize': '1444864', u'type': u'disk', u'domainID': u'428a1232-ba20-4338-b24b-2983a112501c', 'reqsize': '0', u'format': u'cow', u'deviceId': u'8462a296-65cc-4740-a479-912164fa7e1d', u'address': {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'slot': u'0x07'}, u'device': u'disk', u'propagateErrors': u'off', u'optional': u'false', 'vm_custom': {}, 'vmid': 'f0087d72-f051-4f62-b3fd-dd1a56a211ee', u'volumeID': u'4bb7829f-89c8-4132-9ec3-960e39094898', u'diskType': u'file', u'specParams': {}, u'discard': False} (code=1) (vm:1683)
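In case it helps with diagnosis, this is roughly the query I've been using to look at the volumes behind that disk in the engine database (the image_group_id is the imageID from the error above); as far as I understand, imagestatus 1 means OK and 4 means ILLEGAL, but please correct me if I have the column semantics wrong:

    select image_guid, parentid, imagestatus, active, vm_snapshot_id
    from images
    where image_group_id = '8462a296-65cc-4740-a479-912164fa7e1d';

Since this is an NFS domain, I believe the same legality flag should also show up as a LEGALITY line in the volume's .meta file under the storage domain's images/<image group id>/ directory on the export.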

Can you attach the logs from the original failure that caused the active snapshot to disappear? And please also include the exact INSERT command you ran.
participants (2)
- Benny Zlotnik
- pwightm@gmail.com