[Users] VM snapshot delete failed - iSCSI domain

Milan Dlauhy milan.dlauhy at i.cz
Fri Nov 1 13:40:28 UTC 2013


On Fri, 2013-10-25 at 21:10 +0200, Milan Dlauhy wrote:
> Hi,
> I am testing VM snapshots on an iSCSI storage domain with oVirt
> 3.3.1-0.3.beta2.
> I can't delete a snapshot.
> The same error occurred with ovirt-engine-3.3.0-4.el6 before I upgraded today.
> 
> 
> My test:
> 
> create VM - server3 - 2 GB, CentOS 6.4
> start VM 
> shutdown VM
> create snapshot s3-snap-1 
> start VM 
> shutdown VM
> delete snapshot  ==> Failed
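> 
> (For reference, a rough, untested sketch of the same steps with the
> Python SDK (ovirt-engine-sdk-python); the engine URL and credentials
> below are placeholders and the VM "server3" is assumed to already exist:)
> 
> import time
> from ovirtsdk.api import API
> from ovirtsdk.xml import params
> 
> # placeholder engine URL and credentials
> api = API(url='https://engine.example.com/api',
>           username='admin@internal', password='***', insecure=True)
> 
> def wait_for(state):
>     # poll until server3 reaches the given state ('up' or 'down')
>     while api.vms.get(name='server3').status.state != state:
>         time.sleep(5)
> 
> vm = api.vms.get(name='server3')
> vm.start();  wait_for('up')
> vm.stop();   wait_for('down')
> vm.snapshots.add(params.Snapshot(description='s3-snap-1'))
> # (waiting for the snapshot task to finish is omitted here)
> vm.start();  wait_for('up')
> vm.stop();   wait_for('down')
> snap = [s for s in vm.snapshots.list()
>         if s.get_description() == 's3-snap-1'][0]
> snap.delete()   # this is the step that fails
> api.disconnect()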

I tried to start the VM through the webadmin GUI, but it failed with:
 could not open disk image 
/rhev/data-center/6b2e837a-e72b-46bf-9963-6fad2a95c75b/7046ad89-a197-4450-a9fd-b9c58be66327/images/e6cadf99-68bf-4255-97bb-30e04c4c38b3/ab65a322-1cb3-47ce-b2a0-d942b9fb3d77

ls -l /rhev/data-center/6b2e837a-e72b-46bf-9963-6fad2a95c75b/7046ad89-a197-4450-a9fd-b9c58be66327/images/e6cadf99-68bf-4255-97bb-30e04c4c38b3/ab65a322-1cb3-47ce-b2a0-d942b9fb3d77
lrwxrwxrwx. 1 vdsm kvm 78 Oct 25 16:57 /rhev/data-center/6b2e837a-e72b-46bf-9963-6fad2a95c75b/7046ad89-a197-4450-a9fd-b9c58be66327/images/e6cadf99-68bf-4255-97bb-30e04c4c38b3/ab65a322-1cb3-47ce-b2a0-d942b9fb3d77 -> /dev/7046ad89-a197-4450-a9fd-b9c58be66327/ab65a322-1cb3-47ce-b2a0-d942b9fb3d77

lvs:
  LV                                   VG                                   Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  ab65a322-1cb3-47ce-b2a0-d942b9fb3d77 7046ad89-a197-4450-a9fd-b9c58be66327 -wi------  2.00g
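 
For what it's worth, the image/parent tags that VDSM keeps on block
volumes (what the getVolumeTag() call in the vdsm traceback quoted below
looks up) can be listed with plain LVM, e.g.:

lvs -o lv_name,lv_tags 7046ad89-a197-4450-a9fd-b9c58be66327

I would expect tags like IU_<image-uuid> and PU_<parent-uuid> there, if
I read the vdsm block volume code correctly.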


The lvs Attr field shows the LV is not activated (no "a" flag), so I activated it and started the VM from the command line:

lvchange -aey /dev/7046ad89-a197-4450-a9fd-b9c58be66327/ab65a322-1cb3-47ce-b2a0-d942b9fb3d77
/usr/libexec/qemu-kvm -m 512 -name server3 -drive file=/dev/7046ad89-a197-4450-a9fd-b9c58be66327/ab65a322-1cb3-47ce-b2a0-d942b9fb3d77 -vnc :5

VM is running !!!
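
For completeness, while the LV is active the image header can also be
inspected with

qemu-img info /dev/7046ad89-a197-4450-a9fd-b9c58be66327/ab65a322-1cb3-47ce-b2a0-d942b9fb3d77

and I assume the volume should be deactivated again afterwards
(lvchange -an /dev/7046ad89-a197-4450-a9fd-b9c58be66327/ab65a322-1cb3-47ce-b2a0-d942b9fb3d77)
so that VDSM keeps managing activation itself.
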
Any help would be appreciated.

Milan


> 
> 
> engine.log:
> 
> 2013-10-08 16:55:45,153 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-74) Failed in HSMGetAllTasksStatusesVDS method
> 2013-10-08 16:55:45,154 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-74) Error code LogicalVolumeDoesNotExistError and error message VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Logical volume does not exist
> 
> vdsm.log:
> c7e81381-d2f7-4634-b1ae-7f5fb87ba393::ERROR::2013-10-25 16:57:39,424::task::850::TaskManager.Task::(_setError) Task=`c7e81381-d2f7-4634-b1ae-7f5fb87ba393`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 857, in _run
>     return fn(*args, **kargs)
>   File "/usr/share/vdsm/storage/task.py", line 318, in run
>     return self.cmd(*self.argslist, **self.argsdict)
>   File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
>     return f(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 1937, in mergeSnapshots
>     sdUUID, vmUUID, imgUUID, ancestor, successor, postZero)
>   File "/usr/share/vdsm/storage/image.py", line 1162, in merge
>     srcVol.shrinkToOptimalSize()
>   File "/usr/share/vdsm/storage/blockVolume.py", line 315, in shrinkToOptimalSize
>     volParams = self.getVolumeParams()
>   File "/usr/share/vdsm/storage/volume.py", line 1008, in getVolumeParams
>     volParams['imgUUID'] = self.getImage()
>   File "/usr/share/vdsm/storage/blockVolume.py", line 494, in getImage
>     return self.getVolumeTag(TAG_PREFIX_IMAGE)
>   File "/usr/share/vdsm/storage/blockVolume.py", line 464, in getVolumeTag
>     return _getVolumeTag(self.sdUUID, self.volUUID, tagPrefix)
>   File "/usr/share/vdsm/storage/blockVolume.py", line 662, in _getVolumeTag
>     tags = lvm.getLV(sdUUID, volUUID).tags
>   File "/usr/share/vdsm/storage/lvm.py", line 851, in getLV
>     raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
> LogicalVolumeDoesNotExistError: Logical volume does not exist: ('7046ad89-a197-4450-a9fd-b9c58be66327/_remove_me_QVdrCeOz_ab65a322-1cb3-47ce-b2a0-d942b9fb3d77',)
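> 
> As far as I understand, the "_remove_me_" prefix is what VDSM puts on
> an LV that it has marked for removal during the merge, so whether such
> a volume is still present in the VG can be checked with e.g.:
> 
> lvs --noheadings -o lv_name 7046ad89-a197-4450-a9fd-b9c58be66327 | grep _remove_me_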
> 
> 
> SELECT vm_id, snapshot_id, snapshot_type, status, description, creation_date FROM snapshots where vm_id = 'fa0b9f17-31b7-47b2-a237-5f0e15ade3ca';
> 
>                 vm_id                 |             snapshot_id              | snapshot_type | status | description |       creation_date        
> --------------------------------------+--------------------------------------+---------------+--------+-------------+----------------------------
>  fa0b9f17-31b7-47b2-a237-5f0e15ade3ca | 900c4977-01b7-43f5-a18f-812ebd6cbf2a | ACTIVE        | OK     | Active VM   | 2013-10-25 15:53:41.913+02
>  fa0b9f17-31b7-47b2-a237-5f0e15ade3ca | 5a7ac315-7393-45de-9d9b-c5283908e596 | REGULAR       | BROKEN | s3-snap-1   | 2013-10-25 16:51:18.205+02
> 
> 
> 
> SELECT image_group_id, image_guid, size, parentid, vm_snapshot_id, active FROM images where image_group_id = 'e6cadf99-68bf-4255-97bb-30e04c4c38b3';
>             image_group_id            |              image_guid              |    size    |               parentid               |            vm_snapshot_id            | active 
> --------------------------------------+--------------------------------------+------------+--------------------------------------+--------------------------------------+--------
>  e6cadf99-68bf-4255-97bb-30e04c4c38b3 | ab65a322-1cb3-47ce-b2a0-d942b9fb3d77 | 2147483648 | 834b7dda-80f4-43a7-b80d-3cb2bca8ffc6 | 900c4977-01b7-43f5-a18f-812ebd6cbf2a | t
>  e6cadf99-68bf-4255-97bb-30e04c4c38b3 | 834b7dda-80f4-43a7-b80d-3cb2bca8ffc6 | 2147483648 | 00000000-0000-0000-0000-000000000000 | 5a7ac315-7393-45de-9d9b-c5283908e596 | f
> (2 rows)
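> 
> (The two result sets above can also be joined in one query, in case
> that is easier to read; the column names are taken from the queries above:)
> 
> SELECT i.image_guid, i.parentid, s.description, s.status
>   FROM images i
>   JOIN snapshots s ON s.snapshot_id = i.vm_snapshot_id
>  WHERE i.image_group_id = 'e6cadf99-68bf-4255-97bb-30e04c4c38b3';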
> 
> 
> 
> ls -l /rhev/data-center/mnt/blockSD/7046ad89-a197-4450-a9fd-b9c58be66327/images/e6cadf99-68bf-4255-97bb-30e04c4c38b3
> total 4
> lrwxrwxrwx. 1 vdsm kvm 78 Oct 25 16:57 ab65a322-1cb3-47ce-b2a0-d942b9fb3d77 -> /dev/7046ad89-a197-4450-a9fd-b9c58be66327/ab65a322-1cb3-47ce-b2a0-d942b9fb3d77
> 
> 
> 
> HOST - CentOS 6.4
> 
> vdsm-4.12.1-4.el6.x86_64
> libvirt-0.10.2-18.el6_4.14.x86_64
> qemu-kvm-0.12.1.2-2.355.0.1.el6_4.9.x86_64
> 
> 
> Engine - CentOS 6.4
> 
> ovirt-engine-sdk-python-3.3.0.6-1.el6.noarch
> ovirt-engine-lib-3.3.1-0.3.beta2.el6.noarch
> ovirt-engine-dbscripts-3.3.1-0.3.beta2.el6.noarch
> ovirt-engine-userportal-3.3.1-0.3.beta2.el6.noarch
> ovirt-iso-uploader-3.3.1-1.el6.noarch
> ovirt-release-el6-8-1.noarch
> ovirt-engine-cli-3.3.0.4-1.el6.noarch
> ovirt-host-deploy-java-1.1.1-1.el6.noarch
> ovirt-engine-setup-3.3.1-0.3.beta2.el6.noarch
> ovirt-engine-restapi-3.3.1-0.3.beta2.el6.noarch
> ovirt-engine-tools-3.3.1-0.3.beta2.el6.noarch
> ovirt-engine-webadmin-portal-3.3.1-0.3.beta2.el6.noarch
> ovirt-image-uploader-3.3.1-1.el6.noarch
> ovirt-host-deploy-1.1.1-1.el6.noarch
> ovirt-engine-websocket-proxy-3.3.1-0.3.beta2.el6.noarch
> ovirt-engine-backend-3.3.1-0.3.beta2.el6.noarch
> ovirt-engine-3.3.1-0.3.beta2.el6.noarch
> ovirt-log-collector-3.3.1-1.el6.noarch
> 
> 
> Thanks
>     Milan
> 
> 
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 
Milan Dlauhy
ICZ a.s.
Na hřebenech II 1718/10, 147 00 Praha 4 Nusle, CZ
Tel.: +420 222 275 239
      +420 724 429 878
Fax:  +420 222 271 112
mailto:milan.dlauhy at i.cz
http://www.i.cz
midl at jabbim.cz, ICQ#344919380



