On Sun, Apr 26, 2020 at 2:00 PM Nyika Csaba <csabany(a)freemail.hu> wrote:
-[snip]
In theory, on a hypervisor node the only VG listed should be something like
onn (for oVirt Node New generation, I think).
In my case I also have Gluster volumes, but in your case with an FC SAN you
should only have onn:
[root@ovirt ~]# vgs
  VG                 #PV #LV #SN Attr     VSize    VFree
  gluster_vg_4t        1   2   0 wz--n-   <3.64t       0
  gluster_vg_4t2       1   2   0 wz--n-   <3.64t       0
  gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g       0
  gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g       0
  onn                  1  11   0 wz--n- <228.40g <43.87g
[root@ovirt ~]#
And the "lvs" command should also show only onn-related logical volumes...
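As a quick cross-check (just a sketch, the device names will differ on your
nodes), you can also ask LVM which physical devices back each VG and LV, to
confirm whether any unexpected VGs really come from the SAN LUNs that hold
the VM disks:

[root@node ~]# pvs -o pv_name,vg_name
[root@node ~]# lvs -o vg_name,lv_name,devices

On a clean FC SAN node only onn, on the local system disk, should show up
there.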
Gianluca
Hi,
I checked all nodes, and what I got back from the vgs command is literally
"unbelievable".
Ok, so this is your problem.
The main bugzilla, opened by Germano (a great guy from Red Hat support) back
at the time of RHV 3.6, when I first opened a case on this, was this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1374545
If I remember correctly, you will see the problem only if, inside the VM, you
configured a PV on the whole virtual disk (and not on its partitions) and if
the VM's disk was configured as preallocated.
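Just to illustrate the scenario (device and VG names here are hypothetical),
the guest-side setup that triggers it is something like:

[root@guest ~]# pvcreate /dev/vdb            # PV on the whole virtual disk, no partition table
[root@guest ~]# vgcreate guest_data /dev/vdb

With a preallocated disk, the node's LVM scan then sees guest_data directly
on the corresponding LUN, and that is why the guests' VGs show up in vgs on
your hosts.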
I don't have the detailed steps at hand right now, but for sure you will have
to modify your LVM filter, rebuild the initramfs of the nodes, and reboot
them, one by one.
Inside the bugzilla there was a script for LVM filtering, and there is also
this page for oVirt:
https://blogs.ovirt.org/2017/12/lvm-configuration-the-easy-way/
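If I remember correctly, that page is about vdsm-tool config-lvm-filter; a
rough sketch of the per-node loop would be something like this (please
double-check against the page and the bugzilla before applying it):

[root@node ~]# vdsm-tool config-lvm-filter   # proposes and, after confirmation, writes the LVM filter
[root@node ~]# dracut -f                     # rebuild the initramfs so the filter is active at boot
[root@node ~]# reboot

Put each node into maintenance (migrating its VMs away) before doing this,
and only then move on to the next one.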
Fairly new installations should not have this problem, in my opinion, but you
could be impacted by wrong configurations carried over during upgrades.
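A quick way to see whether a node already carries a (possibly stale) filter
from an older setup is simply to look at lvm.conf, for example:

[root@node ~]# grep -n filter /etc/lvm/lvm.conf

If no filter comes back, or it does not match your current devices, the
procedure above applies.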
Gianluca