[ovirt-users] upgrading disks with snapshots from V3.6 domain failed on NFS and Export domain V4, QCOW V2 to V3 renaming

Oliver Riesener Oliver.Riesener at hs-bremen.de
Fri Mar 2 08:55:00 UTC 2018


Hi,

after upgrading cluster compatibility from V3.6 to V4.2,
I found that all V3.6 disks with *snapshots* (QCOW v2) are no longer
working on NFS and Export domains.

No UI command solves this problem; they all get stuck, worsen the disk
state to "Illegal", and produce "Async Tasks" that never end.

It seems that the disk files on the storage domain have been renamed with
an appended -NNNNNNNNNNNN suffix, but the QCOW backing file paths inside
the data files were not updated. They still point to the old disk names.

[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ls -la
total 4983972
drwxr-xr-x.  2 vdsm kvm        4096 Feb 28 12:57 .
drwxr-xr-x. 64 vdsm kvm        4096 Mar  1 12:02 ..
-rw-rw----.  1 vdsm kvm 53687091200 Sep  5  2016 239c0ffc-8249-4d08-967a-619abbbb897a
-rw-rw----.  1 vdsm kvm     1048576 Sep  5  2016 239c0ffc-8249-4d08-967a-619abbbb897a.lease
-rw-r--r--.  1 vdsm kvm         319 Sep  5  2016 239c0ffc-8249-4d08-967a-619abbbb897a.meta
-rw-rw----.  1 vdsm kvm   966393856 Sep  6  2016 2f773536-9b60-4f53-b179-dbf64d182a41
-rw-rw----.  1 vdsm kvm     1048576 Sep  5  2016 2f773536-9b60-4f53-b179-dbf64d182a41.lease
-rw-r--r--.  1 vdsm kvm         264 Sep  6  2016 2f773536-9b60-4f53-b179-dbf64d182a41.meta
-rw-rw----.  1 vdsm kvm  2155806720 Feb 14 11:53 67f96ffc-3a4f-4f3d-9c1b-46293e0be762
-rw-rw----.  1 vdsm kvm     1048576 Sep  6  2016 67f96ffc-3a4f-4f3d-9c1b-46293e0be762.lease
-rw-r--r--.  1 vdsm kvm         260 Sep  6  2016 67f96ffc-3a4f-4f3d-9c1b-46293e0be762.meta

[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# file *
239c0ffc-8249-4d08-967a-619abbbb897a:       x86 boot sector; partition 1: ID=0x83, starthead 32, startsector 2048, 104853504 sectors, code offset 0xb8
239c0ffc-8249-4d08-967a-619abbbb897a.lease: data
239c0ffc-8249-4d08-967a-619abbbb897a.meta:  ASCII text
2f773536-9b60-4f53-b179-dbf64d182a41:       QEMU QCOW Image (v2), has backing file (path ../706ff176-4f96-42fe-a5fa-56434347f16c/239c0ffc-8249-4d08-967a), 53687091200 bytes
2f773536-9b60-4f53-b179-dbf64d182a41.lease: data
2f773536-9b60-4f53-b179-dbf64d182a41.meta:  ASCII text
67f96ffc-3a4f-4f3d-9c1b-46293e0be762:       QEMU QCOW Image (v2), has backing file (path ../706ff176-4f96-42fe-a5fa-56434347f16c/2f773536-9b60-4f53-b179), 53687091200 bytes
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.lease: data
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.meta:  ASCII text
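
For completeness, the stale backing references can also be read straight
from the QCOW2 headers with qemu-img. A minimal sketch, assuming qemu-img
is installed on the host (it is on any vdsm node):

# print the backing file recorded in each QCOW2 header
for img in 2f773536-9b60-4f53-b179-dbf64d182a41 \
           67f96ffc-3a4f-4f3d-9c1b-46293e0be762; do
    echo "== $img"
    qemu-img info "$img" | grep '^backing file'
done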

My solution is to hard-link the disk files to their old names as well. Then
the disks can be handled by the UI again.

[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ln 239c0ffc-8249-4d08-967a-619abbbb897a 239c0ffc-8249-4d08-967a
[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ln 2f773536-9b60-4f53-b179-dbf64d182a41 2f773536-9b60-4f53-b179
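
With many affected disks this linking can be scripted. The following is
only a sketch I have not yet run against a whole domain; it assumes the
rename always just appended the last "-NNNNNNNNNNNN" group, so the old
name is the new name with that group cut off, and the paths are
placeholders for your own mount:

cd /rhev/data-center/mnt/<nfs-export>/<sd-uuid>/images   # hypothetical path
for dir in */; do
    for f in "$dir"*-*-*-*-*; do      # full UUIDs have four dashes
        [ -e "$f" ] || continue       # skip if the glob matched nothing
        case "$f" in *.lease|*.meta) continue ;; esac
        old="${f%-*}"                 # strip the appended -NNNNNNNNNNNN group
        [ -e "$old" ] || ln "$f" "$old"
    done
done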

To fix the illegal disk state, I manipulated the PostgreSQL database
directly, thanks to the ovirt-users mailing list.
A rescan of disks in the UI could also work; I will test it this evening,
as I have a lot of old exported disks with snapshots ...
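
For reference, a sketch of the kind of update involved (assuming the
engine database is named "engine"; 4 = ILLEGAL and 1 = OK in the images
table; '<disk-id>' is a placeholder, and take an engine-backup first):

# on the engine host
su - postgres -c "psql engine -c \"UPDATE images SET imagestatus = 1 \
  WHERE imagestatus = 4 AND image_group_id = '<disk-id>';\""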

Is there a smarter way to do it?

Cheers

Olri