On Wed, Mar 4, 2020 at 5:12 PM Thorsten Glaser <t.glaser(a)tarent.de> wrote:
> Hi *,
> I’m a bit frustrated, so please excuse any harshness in this mail.
I try... ;-)
> Whose idea was it to place qcow on logical volumes anyway?
Not mine: I'm an end user and sometimes a contributor of
ideas/solutions... and problems...
Anyway, the basic idea is to provide thin-provisioned disks when you have
block storage (SAN, iSCSI).
The alternative would have been to implement a cluster file system on top
of the SAN/iSCSI LUNs (as VMFS is in vSphere, or OCFS2 in Oracle
Virtualization).
But I think none of the existing solutions (e.g. GFS) was considered (and
indeed is not, in my opinion) robust and fast enough to manage a workload
with many hypervisors (i.e. distributed consumers of the cluster file
system) and many users (VMs) on each of those hypervisors.
I think you could read this:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/...
"
If the virtual disk is thinly provisioned, a 1 GB logical volume is
created. The logical volume is continuously monitored by the host on which
the virtual machine is running. As soon as the usage nears a threshold the
host notifies the SPM, and the SPM extends the logical volume by 1 GB. The
host is responsible for resuming the virtual machine after the logical
volume has been extended. If the virtual machine goes into a paused state
it means that the SPM could not extend the disk in time. This occurs if the
SPM is too busy or if there is not enough storage space.
"
Note that, when needed, you can modify parameters customizing the cited
"threshold" and also the size of each extension (default 1 GB).
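If I remember correctly these are set on the hosts in /etc/vdsm/vdsm.conf;
the option names below (volume_utilization_percent and
volume_utilization_chunk_mb) are from memory, so take this only as a sketch
and double check them against the vdsm.conf.sample shipped with your VDSM
version:

[irs]
# together with the chunk size this sets the free-space limit that
# triggers an extension request; lower values make the host ask earlier
volume_utilization_percent = 25
# size of each extension chunk in MB (default 1024 = the 1 GB cited above);
# higher values extend in bigger steps
volume_utilization_chunk_mb = 2048

After changing it you have to restart vdsmd on the host
(systemctl restart vdsmd).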
> I was shrinking a hard disc: first the filesystems inside the VM,
> then the partitions inside the VM, then the LV… then I wanted to
> convert the LV to a compressed qcow2 file for transport, and it
> told me that the source is corrupted. Huh?
I don't want to comment on the method, but this is the order I would follow:
filesystem
LV
PV (supposing your PV is on top of a partition, as you seem to write)
partition
With the method above it is difficult in general to compute the exact sizes
and avoid corruption.
In general you have to be conservative... at the cost of possibly losing
some MBs (a rough sketch follows below).
I have done something similar in the past (stopping at the LV level,
because I needed space for other LVs on the same VG, so no PV or partition
resize was involved).
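Just to illustrate what I mean by being conservative, a rough sketch of the
top-down sequence, with placeholder device names and sizes (every layer is
left a bit smaller than the one below it; adapt to your own layout and take
a backup first):

# inside the VM, with the filesystem unmounted (or from a rescue system)
e2fsck -f /dev/vg_guest/lv_root
resize2fs /dev/vg_guest/lv_root 18G              # 1. filesystem, below the target LV size
lvreduce -L 20G /dev/vg_guest/lv_root            # 2. LV, still larger than the filesystem
pvresize --setphysicalvolumesize 25G /dev/vda2   # 3. PV, only if the partition will shrink too
# 4. finally shrink the partition (parted/fdisk), keeping its end
#    beyond the new PV size

Afterwards, on the host, something like
"qemu-img convert -c -O qcow2 /dev/VG/LV disk.qcow2" (with -f raw or
-f qcow2 depending on the actual format of the disk on the LV) would give
you the compressed qcow2 file for transport.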
> I had already wondered why I was unable to inspect the LV on the
> host the usual way (kpartx -v -a /dev/VG/LV after finding out,
> with “virsh --readonly -c qemu:///system domblklist VM_NAME”,
> which LV is the right one).
It turned out in the past (and I was one of the impacted guys) that if
inside a VM you created PVs on whole virtual disks, this LVM structure was
somehow exposed to the underlying LVM structure of the host, with nasty
impacts on some activities.
In my case the impacts were on live storage migration and on deleting a
disk of a VM.
At that time (beginning of 2018) Red Hat Support was very helpful (it was
an RHV environment), and in particular Olimp Bockowski.
This resulted in some bugzillas and solutions, some of them:
"RHV: Hosts boot with Guest LVs activated "
https://access.redhat.com/solutions/2662261
https://bugzilla.redhat.com/show_bug.cgi?id=1450114
https://bugzilla.redhat.com/show_bug.cgi?id=1449968
https://bugzilla.redhat.com/show_bug.cgi?id=1202595
There is also a filter tool available:
https://bugzilla.redhat.com/show_bug.cgi?id=1522926
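If I remember correctly, the tool that came out of that RFE is
vdsm-tool config-lvm-filter, available on 4.2/4.3 hosts; roughly
something like:

[root@ov200 ~]# vdsm-tool config-lvm-filter
# it analyzes the local LVM setup and proposes a filter for
# /etc/lvm/lvm.conf so that the host only scans the devices it needs;
# if I remember correctly it asks for confirmation before applying it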
So, based on the opened bugzillas and on end-user problems, it was decided
(correctly, in my opinion) to hide all the information apart from what is
necessary.
For example on a plain CentOS host in 4.3.8 I have:
[root@ov200 ~]# lvs
  LV   VG Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root cl -wi-ao---- <119.12g
  swap cl -wi-ao----   16.00g
[root@ov200 ~]#
If you want to display that information and bypass the LVM filters
predefined on the hypervisors (use this only in case of problems or for
debugging!), you can override the configuration with the standard
"--config" option.
This switch was very useful when debugging problems with VM disks, and it
gives you the whole real LVM structure, including the tags used by oVirt:
[root@ov200 ~]# lvs --config 'global { use_lvmetad=0 } devices { filter = [ "a|.*/|" ] } ' -o +tags
  LV                                   VG                                   Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert LV Tags
  root                                 cl                                   -wi-ao---- <119.12g
  swap                                 cl                                   -wi-ao----   16.00g
  01e20442-21e2-4237-abdb-6e919bb1f522 fa33df49-b09d-4f86-9719-ede649542c21 -wi-------   20.00g                                                     IU_24d917f3-0858-45a0-a7a4-eba8b28b2a58,MD_47,PU_00000000-0000-0000-0000-000000000000
...
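For example, to look at a VM disk the way you tried, a rough sketch of what
I would do on the host (placeholder VG/LV names; again, only for debugging,
and deactivate everything afterwards):

[root@ov200 ~]# virsh --readonly -c qemu:///system domblklist VM_NAME
[root@ov200 ~]# lvchange --config 'global { use_lvmetad=0 } devices { filter = [ "a|.*/|" ] }' -ay VG/LV
[root@ov200 ~]# qemu-img info /dev/VG/LV    # check the actual format (raw vs qcow2) before kpartx/convert
[root@ov200 ~]# kpartx -v -a /dev/VG/LV

and when done, kpartx -d /dev/VG/LV and lvchange -an VG/LV (with the same
--config override).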
But all of the above is about the past: if you have a 4.3 environment you
don't have to worry about it.
Hope it helps a little with understanding,
Gianluca