On March 4, 2020 6:05:14 PM GMT+02:00, Thorsten Glaser <t.glaser(a)tarent.de> wrote:
Hi *,
I’m a bit frustrated, so please excuse any harshness in this mail.
Whose idea was it to place qcow on logical volumes anyway?
I was shrinking a hard disc: first the filesystems inside the VM,
then the partitions inside the VM, then the LV… then I wanted to
convert the LV to a compressed qcow2 file for transport, and it
told me that the source is corrupted. Huh?
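The shrink sequence described above, as a hedged sketch. The device names, the 20 GiB target size, and the ext4 filesystem are assumptions for illustration, not details from this thread:

```shell
# Inside the VM: shrink the filesystem first, then the partition.
e2fsck -f /dev/vda1          # mandatory check before an offline shrink
resize2fs /dev/vda1 20G      # shrink the filesystem to the target size
# ...then shrink the partition with fdisk/parted to match.

# On the host: shrink the LV last, leaving headroom above the
# partition end so lvreduce never cuts into live data.
lvreduce -L 21G /dev/VG/LV
```

The ordering matters: shrinking in the other direction (LV before filesystem) truncates data, which is unrecoverable.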
I had already wondered why I was unable to inspect the LV on the
host the usual way (kpartx -v -a /dev/VG/LV after finding out,
with “virsh --readonly -c qemu:///system domblklist VM_NAME”,
which LV is the right one).
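The inspection steps above as one sequence (VM_NAME and VG/LV are placeholders). Note this only works when the LV holds a raw image; with qcow2 stored on the LV, kpartx finds no partition table, which matches the symptom described:

```shell
# Find which LV backs the VM's disk:
virsh --readonly -c qemu:///system domblklist VM_NAME

# Map the partitions inside the LV as /dev/mapper entries:
kpartx -v -a /dev/VG/LV

# ...inspect/mount them, then remove the mappings again:
kpartx -d /dev/VG/LV
```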
Turns out that ovirt stores qcow on LVs instead of raw images ☹
Well, vgcfgrestore to my rescue:
- vgcfgrestore -l VG_NAME
- vgcfgrestore -f /etc/… VG_NAME
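Spelled out, the two vgcfgrestore steps look like this. LVM keeps metadata backups and archives under /etc/lvm/ (typically /etc/lvm/backup and /etc/lvm/archive); the archive filename below is hypothetical, chosen from the listing in the first step:

```shell
# List the archived metadata versions for the volume group:
vgcfgrestore -l VG_NAME

# Restore a specific archived version (filename is an example --
# pick the entry from the listing that predates the bad change):
vgcfgrestore -f /etc/lvm/archive/VG_NAME_00042.vg VG_NAME
```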
The image was still marked as corrupted, but exported fine. I
could not write it back to the LV as preallocated, which seems
to be what ovirt does, because qemu-img doesn’t wish to do that
when the target is a special device (not a regular file). Meh.
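A sketch of the export and the attempted write-back (paths are placeholders). The export side is straightforward; on the write-back side, qemu-img's preallocation option applies to files it creates itself, which is consistent with the refusal reported above when the target is an existing block device:

```shell
# Export the qcow2 stored on the LV to a compressed qcow2 file
# for transport:
qemu-img convert -O qcow2 -c /dev/VG/LV /tmp/vm-disk.qcow2

# Plain raw conversion back onto the device does work, since it
# just writes the decoded contents over the block device:
qemu-img convert -O raw /tmp/vm-disk.qcow2 /dev/VG/LV
```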
Does ovirt handle raw images on LV, and if so, how can we enable
this for new VMs? If not, whyever the hell not? And whose “great”
idea was this anyway?
Thanks in advance,
//mirabilos
Hey Thorsten,
That was harsh! I know a lot of Germans like to be in control, yet that is not the case here.
What are you actually trying to do? Are you trying to sparsify your VM's disks?
Are you sure that this approach is the correct one? I always thought that a storage
migration always sparsifies the VM's disks. Maybe I'm wrong... who knows.
Anyway, if you have recommendations, this is the place, but please be more diplomatic.
Best Regards,
Strahil Nikolov