[ovirt-users] Disks Snapshot
Nir Soffer
nsoffer at redhat.com
Mon Mar 14 15:40:03 UTC 2016
On Mon, Mar 14, 2016 at 5:05 PM, Marcelo Leandro <marceloltmm at gmail.com> wrote:
>
>
> Is the merge (snapshot deletion) cold (the VM is down) or live (the VM is up)?
>
> VM is up
>
> What version are you running?
>
> oVirt Engine Version: 3.6.3.4-1.el7.centos
>
>
> Can you please share engine and vdsm logs?
>
> yes.
Looking at your vdsm log, I see this error (454 times in 6 hours), which looks like a bug:
periodic/5::ERROR::2016-03-12 09:28:02,847::executor::188::Executor::(_execute_task) Unhandled exception in <NumaInfoMonitor vm=8d41b39a-5995-41a0-9807-7d9a6af308a5 at 0x7f898836f090>
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 186, in _execute_task
    callable()
  File "/usr/share/vdsm/virt/periodic.py", line 279, in __call__
    self._execute()
  File "/usr/share/vdsm/virt/periodic.py", line 324, in _execute
    self._vm.updateNumaInfo()
  File "/usr/share/vdsm/virt/vm.py", line 5071, in updateNumaInfo
    self._numaInfo = numaUtils.getVmNumaNodeRuntimeInfo(self)
  File "/usr/share/vdsm/numaUtils.py", line 116, in getVmNumaNodeRuntimeInfo
    vnode_index = str(vcpu_to_vnode[vcpu_id])
KeyError: 1
Adding Francesco and Martin to look at this.
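For reference, the failing lookup is vcpu_to_vnode[vcpu_id] in numaUtils.getVmNumaNodeRuntimeInfo. Until the root cause is understood, a guard along these lines would at least avoid the unhandled exception. This is only a sketch, not a tested vdsm patch; it assumes the lookup runs inside a loop over the guest vcpus, as the traceback suggests:

    # Sketch only, assuming a loop over the guest vcpus around numaUtils.py
    # line 116; vcpu_to_vnode and vcpu_id are the names from the traceback.
    vnode_index = vcpu_to_vnode.get(vcpu_id)
    if vnode_index is None:
        # The vcpu has no vnode mapping (yet); skip it instead of raising
        # KeyError and killing the periodic NumaInfoMonitor task.
        continue
    vnode_index = str(vnode_index)

Why vcpu 1 is missing from the mapping in the first place still needs a real look.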
>
> Please note that at some point we try to verify that the image was removed
> by running getVolumeInfo, so the "volume not found" error is expected. The
> thing is, you say that the volume does exist.
> Can you run the following command on the host:
>
> vdsClient -s 0 getVolumeInfo <sd> <sp> <img> <vol>
>
> The command returns:
> [root at srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# vdsClient -s 0 getVolumeInfo c2dc0101-748e-4a7b-9913-47993eaa52bd 77e24b20-9d21-4952-a089-3c5c592b4e6d 93633835-d709-4ebb-9317-903e62064c43 948d0453-1992-4a3c-81db-21248853a88a
> Volume does not exist: ('948d0453-1992-4a3c-81db-21248853a88a',)
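If you want to script that check for other volumes, here is a rough sketch wrapping the same vdsClient call. The UUIDs are the ones from your command above; adjust them as needed:

    # Sketch: run the vdsClient call above and report whether the volume
    # is still known to vdsm. UUIDs are the ones from the command above.
    import subprocess

    SD = "c2dc0101-748e-4a7b-9913-47993eaa52bd"   # storage domain
    SP = "77e24b20-9d21-4952-a089-3c5c592b4e6d"   # storage pool
    IMG = "93633835-d709-4ebb-9317-903e62064c43"  # image group
    VOL = "948d0453-1992-4a3c-81db-21248853a88a"  # volume

    proc = subprocess.Popen(
        ["vdsClient", "-s", "0", "getVolumeInfo", SD, SP, IMG, VOL],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    out = proc.communicate()[0].decode("utf-8", "replace")
    if "Volume does not exist" in out:
        print("volume %s is gone on the storage side" % VOL)
    else:
        print(out)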
>
> After restarting the host the VM was running on, the disk links in the
> image_group_id directory were broken but were not removed.
>
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:04 215a902a-1b99-403b-a648-21977dd0fa78 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/215a902a-1b99-403b-a648-21977dd0fa78
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31 3fba372c-4c39-4843-be9e-b358b196331d -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44 5097df27-c676-4ee7-af89-ecdaed2c77be -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:13 948d0453-1992-4a3c-81db-21248853a88a -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/948d0453-1992-4a3c-81db-21248853a88a
> lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30 b47f58e0-d576-49be-b8aa-f30581a0373a -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01 c598bb22-a386-4908-bfa1-7c44bd764c96 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
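A quick way to see which of those links are dangling (the link is still there but its /dev/<vg>/<lv> target is gone) is something like the sketch below; run it from the image directory shown above, or adjust the path:

    # Sketch: report symlinks in the image directory whose LV target no
    # longer exists (i.e. the LV was removed but the link was left behind).
    import os

    image_dir = "."  # e.g. .../93633835-d709-4ebb-9317-903e62064c43

    for name in sorted(os.listdir(image_dir)):
        path = os.path.join(image_dir, name)
        if os.path.islink(path) and not os.path.exists(path):
            print("dangling link: %s -> %s" % (name, os.readlink(path)))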
>
>
> Your question is not clear. Can you explain what the unexpected behavior is?
>
> Shouldn't the link to the LV be deleted after deleting the snapshot?
>
>
> Thanks
>
> 2016-03-14 10:14 GMT-03:00 Nir Soffer <nsoffer at redhat.com>:
>>
>> On Sat, Mar 12, 2016 at 3:10 PM, Marcelo Leandro <marceloltmm at gmail.com>
>> wrote:
>> > Good morning
>> >
>> > I have a question: when I create a snapshot, a new LV is created; however,
>> > when I delete the snapshot, the LV is not removed. Is that right?
>>
>> Your question is not clear. Can you explain what the unexpected behavior is?
>>
>> To check whether an LV was created or removed by oVirt, you can run:
>>
>> pvscan --cache
>> lvs vg-uuid
>>
>> Nir
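Expanding on my pvscan/lvs suggestion above: the VG name is the storage domain UUID (c2dc0101-... in this thread), so you can list the LVs that actually exist and compare them with the links in the image directory. A rough sketch, nothing more:

    # Sketch: list the LVs that really exist in the storage domain's VG.
    # The VG name below is the storage domain UUID from this thread.
    import subprocess

    VG = "c2dc0101-748e-4a7b-9913-47993eaa52bd"

    subprocess.check_call(["pvscan", "--cache"])  # refresh LVM metadata cache
    out = subprocess.check_output(["lvs", "--noheadings", "-o", "lv_name", VG])
    for line in out.decode("utf-8", "replace").splitlines():
        print(line.strip())

Any link in the image directory whose name is not in that list is stale.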
>>
>> >
>> > [root at srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
>> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
>> > 3fba372c-4c39-4843-be9e-b358b196331d
>> > b47f58e0-d576-49be-b8aa-f30581a0373a
>> > 5097df27-c676-4ee7-af89-ecdaed2c77be
>> > c598bb22-a386-4908-bfa1-7c44bd764c96
>> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
>> > [root at srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
>> > total 0
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31 3fba372c-4c39-4843-be9e-b358b196331d -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44 5097df27-c676-4ee7-af89-ecdaed2c77be -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
>> > lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30 b47f58e0-d576-49be-b8aa-f30581a0373a -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01 c598bb22-a386-4908-bfa1-7c44bd764c96 -> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
>> >
>> >
>> >
>> > disks snapshot:
>> > [root at srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
>> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > image: 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > file format: qcow2
>> > virtual size: 112G (120259084288 bytes)
>> > disk size: 0
>> > cluster_size: 65536
>> > backing file: ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
>> > backing file format: raw
>> > Format specific information:
>> > compat: 0.10
>> > refcount bits: 16
>> >
>> >
>> > [root at srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
>> > 3fba372c-4c39-4843-be9e-b358b196331d
>> > image: 3fba372c-4c39-4843-be9e-b358b196331d
>> > file format: qcow2
>> > virtual size: 112G (120259084288 bytes)
>> > disk size: 0
>> > cluster_size: 65536
>> > backing file: ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
>> > backing file format: raw
>> > Format specific information:
>> > compat: 0.10
>> > refcount bits: 16
>> >
>> > [root at srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
>> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
>> > image: 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
>> > file format: qcow2
>> > virtual size: 112G (120259084288 bytes)
>> > disk size: 0
>> > cluster_size: 65536
>> > backing file: ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
>> > backing file format: raw
>> > Format specific information:
>> > compat: 0.10
>> > refcount bits: 16
>> >
>> >
>> > disk base:
>> > [root at srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
>> > b47f58e0-d576-49be-b8aa-f30581a0373a
>> > image: b47f58e0-d576-49be-b8aa-f30581a0373a
>> > file format: raw
>> > virtual size: 112G (120259084288 bytes)
>> > disk size: 0
>> >
>> >
>> > Thanks.
>> > _______________________________________________
>> > Users mailing list
>> > Users at ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>
>