-------- Original message --------
From: Gianluca Cecchi <gianluca.cecchi@gmail.com>
Date: 26 April 2020 11:42:40
Subject: Re: [ovirt-users] Re: Ovirt vs lvm?
To: Nyika Csaba <csabany@freemail.hu>
 
On Sun, Apr 26, 2020 at 11:06 AM Nyika Csaba <csabany@freemail.hu> wrote:
 
Thanks for the advice.
The hypervisors are fresh installs, but the management server has been upgraded step by step from version 3.6 (we have been using oVirt since 2015).
The issue has occurred on different clusters, hosts, and different hypervisor versions. For example, the last-but-one affected VM was on an IBM x3650 host running ovirt-node 4.2, and the last one on a Lenovo host running ovirt-node 4.3.
Best
 
 
In theory, on a hypervisor node the only VG listed should be something like onn (for oVirt Node Next generation, I think)
 
In my case I also have Gluster volumes, but in your case, with an FC SAN, you should only have onn
 
[root@ovirt ~]# vgs
  VG                 #PV #LV #SN Attr   VSize    VFree  
  gluster_vg_4t        1   2   0 wz--n-   <3.64t      0
  gluster_vg_4t2       1   2   0 wz--n-   <3.64t      0
  gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g      0
  gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g      0
  onn                  1  11   0 wz--n- <228.40g <43.87g
[root@ovirt ~]#
 
And also the command "lvs" should show only onn-related logical volumes...
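Guest VGs showing up in "vgs" on a host usually means no LVM device filter is configured, so the host scans the guest LVs inside the storage-domain LUNs. On oVirt 4.2+ hosts, "vdsm-tool config-lvm-filter" can analyze the mounted LVs and propose a suitable filter; the result in /etc/lvm/lvm.conf looks roughly like this sketch (the device path is illustrative, not taken from this thread):

```
# /etc/lvm/lvm.conf -- accept only the PV backing the host's own onn VG,
# reject everything else; the actual device path varies per host
filter = [ "a|^/dev/sda2$|", "r|.*|" ]
```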
 
Gianluca
 
 Hi,

I checked all the nodes, and what I got back from the "vgs" command is literally unbelievable.

Some hosts look fine:
  VG                                   #PV #LV #SN Attr   VSize   VFree  
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n- <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n- <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n- <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n- <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n- <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n- <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n- <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n- <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n- <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n- <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-  <7,20t  <1,15t
  onn                                    1  11   0 wz--n- 277,46g  54,60g

Others:
  VG                                   #PV #LV #SN Attr   VSize   VFree  
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n- <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n- <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n- <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n- <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n- <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n- <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n- <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n- <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n- <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n- <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-  <7,20t  <1,15t
  onn                                    1  11   0 wz--n- 277,46g  54,60g
  vg_okosvaros                           2   7   0 wz-pn- <77,20g      0

Others:
  VG                                   #PV #LV #SN Attr   VSize    VFree  
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n-  <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n-  <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n-  <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n-  <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n-  <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n-  <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n-  <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n-  <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n-  <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n-  <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-   <7,20t  <1,15t
  onn                                    1  13   0 wz--n- <446,07g  88,39g
  vg_4trdb1p                             3   7   0 wz-pn-  157,19g      0
  vg_4trdb1t                             3   7   0 wz-pn-  157,19g      0
  vg_deployconfigrepo                    3   7   0 wz-pn-   72,19g      0
  vg_ektrdb1p                            3   7   0 wz-pn-  157,19g      0
  vg_ektrdb1t                            3   7   0 wz-pn-  157,19g      0
  vg_empteszt                            2   6   0 wz-pn-  <77,20g <20,00g
  vg_helyiertekek                        6   8   0 wz-pn-  278,11g      0
  vg_log                                 3   7   0 wz-pn-  347,19g <50,00g
  vg_monitor1m                           3   7   0 wz-pn-   87,19g      0
  vg_monoradattarappfejlesztoi           2   6   0 wz-pn-  <97,20g      0
  vg_okosvaros                           2   6   0 wz-pn- <377,20g      0

I can see some of the VMs' VGs, but not all of them.
I compared the disk connections of the "problematic" and the "good" VMs (looking for a storage configuration mistake), but every storage domain connected to a "wrong" VM is connected to "good" VMs as well.
I tried powering the VMs off and on again; one of them became "good", the others did not.
Every "wrong" VM was created from the same template, but another 50 "good" VMs were created from that template too.
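A quick way to triage this across hosts is to filter the "vgs" output down to the VGs that should not be visible at all. The helper below is a hypothetical sketch (assuming standard lvm2 and POSIX shell/awk tooling): it keeps only names that are neither the node's own onn VG nor a 36-character storage-domain UUID, so anything it prints is a guest VG leaked onto the host.

```shell
# suspect_vgs: read VG names on stdin (one per line, e.g. from
# `vgs --noheadings -o vg_name`) and print only the suspect ones --
# names that are neither "onn" nor a storage-domain UUID.
suspect_vgs() {
  awk '{ gsub(/^[ \t]+|[ \t]+$/, "") }   # trim leading/trailing whitespace
       $0 != "onn" &&
       !($0 ~ /^[0-9a-f-]+$/ && length($0) == 36) &&
       NF { print }'
}

# Usage on a host: vgs --noheadings -o vg_name | suspect_vgs
```

Running this on every host and diffing the results against the VM-to-host placement in the engine should show whether the leaked VGs follow particular LUNs or particular hosts.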

csabany