On Sun, Apr 26, 2020 at 11:06 AM Nyika Csaba <csabany(a)freemail.hu> wrote:
Thanks for the advice.
The hypervisors are "fresh", but the management server arrived at its
current version from 3.6, step by step (we have been using this oVirt since 2015).
The issue occurred on different clusters, hosts, and different HV versions. For
example, the last-but-one affected VM was on an IBM x3650 host running ovirt-node
v4.2, and the last one on a Lenovo running ovirt-node v4.3.
Best
In theory, on a hypervisor node the only VG listed should be something like
onn (as in oVirt Node New generation, I think).
In my case I also have Gluster volumes, but in your case with an FC SAN you
should only have onn:
[root@ovirt ~]# vgs
  VG                 #PV #LV #SN Attr   VSize    VFree
  gluster_vg_4t        1   2   0 wz--n-   <3.64t       0
  gluster_vg_4t2       1   2   0 wz--n-   <3.64t       0
  gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g       0
  gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g       0
  onn                  1  11   0 wz--n- <228.40g <43.87g
[root@ovirt ~]#
And the "lvs" command should likewise show only onn-related logical
volumes...
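If you want a quick sanity check that prints nothing when everything is as
expected, something like this should work (just a sketch, assuming onn really
is the only VG that belongs on your FC-only nodes):

# any VG other than onn shows up here (e.g. guest VGs leaking in from the SAN LUNs)
vgs --noheadings -o vg_name | grep -vw onn

# same idea for logical volumes: list LVs whose VG is not onn
lvs --noheadings -o vg_name,lv_name | grep -vw onn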
Gianluca