
On Mon, Mar 14, 2016 at 5:05 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
Is it a cold merge (the VM is down) or a live merge (the VM is up) snapshot deletion?
VM is up
What version are you running?
oVirt Engine Version: 3.6.3.4-1.el7.centos
Can you please share engine and vdsm logs?
yes.
Please note that at some point we verify that the image was removed by running getVolumeInfo, hence the "volume not found" error is expected. The thing is, you say that the volume does exist. Can you run the following command on the host:
vdsClient -s 0 getVolumeInfo <sd> <sp> <img> <vol>
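As a cross-check (not something Nir asked for, just a suggestion), running the same call against a volume that should still exist, for example the base volume b47f58e0-d576-49be-b8aa-f30581a0373a from the listings below, is expected to return its metadata rather than an error. A sketch using the UUIDs from this thread:

    # storage domain, pool, and image group UUIDs as in the reply below;
    # the last argument is the base volume, which should still exist.
    vdsClient -s 0 getVolumeInfo \
        c2dc0101-748e-4a7b-9913-47993eaa52bd \
        77e24b20-9d21-4952-a089-3c5c592b4e6d \
        93633835-d709-4ebb-9317-903e62064c43 \
        b47f58e0-d576-49be-b8aa-f30581a0373a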
The command returns:

[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# vdsClient -s 0 getVolumeInfo c2dc0101-748e-4a7b-9913-47993eaa52bd 77e24b20-9d21-4952-a089-3c5c592b4e6d 93633835-d709-4ebb-9317-903e62064c43 948d0453-1992-4a3c-81db-21248853a88a
Volume does not exist: ('948d0453-1992-4a3c-81db-21248853a88a',)
After restarting the host the VM was running on, the disk link under the image_group_id directory was broken (dangling) but was not removed; see the listing below and the quick check sketched after it.
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:04 215a902a-1b99-403b-a648-21977dd0fa78 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/215a902a-1b99-403b-a648-21977dd0fa78
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31 3fba372c-4c39-4843-be9e-b358b196331d -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44 5097df27-c676-4ee7-af89-ecdaed2c77be -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:13 948d0453-1992-4a3c-81db-21248853a88a -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/948d0453-1992-4a3c-81db-21248853a88a
lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30 b47f58e0-d576-49be-b8aa-f30581a0373a -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01 c598bb22-a386-4908-bfa1-7c44bd764c96 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
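A quick way to tell whether such a link is merely dangling or whether the LV behind it still exists (a minimal sketch; the VG name is the storage domain UUID, as in the listing above, and 948d0453-... is the volume reported missing):

    # Does the link still resolve to an existing device node?
    test -e /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/948d0453-1992-4a3c-81db-21248853a88a \
        || echo "dangling link"

    # Does the LV itself still exist in the VG metadata?
    lvs c2dc0101-748e-4a7b-9913-47993eaa52bd/948d0453-1992-4a3c-81db-21248853a88a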
Your question is not clear. Can you explain what the unexpected behavior is?
Shouldn't the link to the LV be deleted after the snapshot is deleted?
Are you talking about the /dev/vgname/lvname link, the links under /run/vdsm/storage/domain/image/volume, or the links under /rhev/data-center/pool/domain/images/image/volume?

/dev/vgname/lvname is created by udev rules when the LV is activated and removed when it is deactivated. To understand if this is the issue, can you show the output of:

    pvscan --cache
    lvs vgname
    ls -l /dev/vgname

both before the merge and after the merge has completed? After the merge, the LV should not exist and the link should be deleted.

Links under /run/vdsm/storage or /rhev/data-center/ are created when starting a VM, and torn down when stopping a VM, hot-unplugging a disk, or removing a snapshot. To understand if there is an issue, we need the output of:

    tree /run/vdsm/storage/domain/image
    tree /rhev/data-center/pool/domain/images/image

both before and after the merge. The links should be deleted.

Nir
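One way to capture the before/after state Nir is asking for, so the two runs can be diffed (a sketch only, using the UUIDs from this thread; adjust the paths if your layout differs):

    # Run once with STAGE=before (prior to deleting the snapshot) and once
    # with STAGE=after (once the merge has completed), then diff the files.
    STAGE=before
    VG=c2dc0101-748e-4a7b-9913-47993eaa52bd      # storage domain UUID = VG name
    POOL=77e24b20-9d21-4952-a089-3c5c592b4e6d    # storage pool UUID
    IMG=93633835-d709-4ebb-9317-903e62064c43     # image group UUID

    pvscan --cache
    lvs "$VG"                                           > lvs-$STAGE.txt
    ls -l /dev/"$VG"                                    > dev-links-$STAGE.txt
    tree /run/vdsm/storage/"$VG"/"$IMG"                 > run-vdsm-$STAGE.txt
    tree /rhev/data-center/"$POOL"/"$VG"/images/"$IMG"  > rhev-$STAGE.txt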
Thanks
2016-03-14 10:14 GMT-03:00 Nir Soffer <nsoffer@redhat.com>:
On Sat, Mar 12, 2016 at 3:10 PM, Marcelo Leandro <marceloltmm@gmail.com> wrote:
Good morning
I have a question: when I take a snapshot, a new LV is created, but when I delete the snapshot the LV is not removed. Is that expected?
Your question is not clear. Can you explain what the unexpected behavior is?
To check if an LV was created or removed by oVirt, you can run:

    pvscan --cache
    lvs vg-uuid
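A concrete form of the same check using the VG from this thread (on a block storage domain the VG name is the storage domain UUID; listing lv_tags is an extra suggestion, since vdsm stores the image group UUID in LV tags, which helps match LVs to disks):

    pvscan --cache
    lvs -o lv_name,lv_size,lv_tags c2dc0101-748e-4a7b-9913-47993eaa52bd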
Nir
[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
27a8bca3-f984-4f67-9dd2-9e2fc5a5f366  7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
3fba372c-4c39-4843-be9e-b358b196331d  b47f58e0-d576-49be-b8aa-f30581a0373a
5097df27-c676-4ee7-af89-ecdaed2c77be  c598bb22-a386-4908-bfa1-7c44bd764c96
5aaf9ce9-d7ad-4607-aab9-2e239ebaed51

[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
total 0
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31 3fba372c-4c39-4843-be9e-b358b196331d -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44 5097df27-c676-4ee7-af89-ecdaed2c77be -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30 b47f58e0-d576-49be-b8aa-f30581a0373a -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01 c598bb22-a386-4908-bfa1-7c44bd764c96 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
Snapshot disks:

[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
image: 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
file format: qcow2
virtual size: 112G (120259084288 bytes)
disk size: 0
cluster_size: 65536
backing file: ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
backing file format: raw
Format specific information:
    compat: 0.10
    refcount bits: 16

[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info 3fba372c-4c39-4843-be9e-b358b196331d
image: 3fba372c-4c39-4843-be9e-b358b196331d
file format: qcow2
virtual size: 112G (120259084288 bytes)
disk size: 0
cluster_size: 65536
backing file: ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
backing file format: raw
Format specific information:
    compat: 0.10
    refcount bits: 16

[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
image: 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
file format: qcow2
virtual size: 112G (120259084288 bytes)
disk size: 0
cluster_size: 65536
backing file: ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
backing file format: raw
Format specific information:
    compat: 0.10
    refcount bits: 16

Base disk:

[root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info b47f58e0-d576-49be-b8aa-f30581a0373a
image: b47f58e0-d576-49be-b8aa-f30581a0373a
file format: raw
virtual size: 112G (120259084288 bytes)
disk size: 0
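Not part of the original mail, but the whole backing chain can be printed in one step, provided the installed qemu-img supports --backing-chain (added back in QEMU 1.2):

    # Shows the top qcow2 volume and every backing file beneath it in one run.
    qemu-img info --backing-chain 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366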
Thanks.