Hi,

Thank you for your kind, detailed answers.
You helped me a lot.

Now I hope we can solve the problem.

Special thanks to Gianluca too.

csabany 
-------- Original message --------
From: Nir Soffer <nsoffer@redhat.com>
Date: 26 April 2020 17:39:36
Subject: [ovirt-users] Re: Ovirt vs lvm?
To: Nyika Csaba <csabany@freemail.hu>
On Sun, Apr 26, 2020 at 3:00 PM Nyika Csaba <csabany@freemail.hu> wrote:
>
>
> -------- Original message --------
> From: Gianluca Cecchi <gianluca.cecchi@gmail.com>
> Date: 26 April 2020 11:42:40
> Subject: Re: [ovirt-users] Re: Ovirt vs lvm?
> To: Nyika Csaba <csabany@freemail.hu>
>
> On Sun, Apr 26, 2020 at 11:06 AM Nyika Csaba <csabany@freemail.hu> wrote:
>
> Thanks for the advice.
> The hypervisors are "fresh", but the management server has arrived at this version from 3.6 step by step (we have been using this oVirt setup since 2015).
> The issue occurred on different clusters, hosts, and different HV versions. For example, the last but one affected VM was on an IBM x3650 host running ovirt-node v4.2, and the last one on a Lenovo running ovirt-node v4.3.
> Best
>
>
> In theory, on a hypervisor node the only VG listed should be something like onn (as in oVirt Node New generation, I think)
>
> In my case I also have gluster volumes, but in your case with an FC SAN you should only have onn
>
> [root@ovirt ~]# vgs
> VG #PV #LV #SN Attr VSize VFree
> gluster_vg_4t 1 2 0 wz--n- <3.64t 0
> gluster_vg_4t2 1 2 0 wz--n- <3.64t 0
> gluster_vg_nvme0n1 1 3 0 wz--n- 349.32g 0
> gluster_vg_nvme1n1 1 2 0 wz--n- 931.51g 0
> onn 1 11 0 wz--n- <228.40g <43.87g
> [root@ovirt ~]#
>
> Also, the command "lvs" should show only onn-related logical volumes...
>
> Gianluca
>
> Hi,
>
> I checked all nodes, and what I got back from the vgs command was literally "unbelievable".
>
> Some hosts look good:
> VG #PV #LV #SN Attr VSize VFree
> 003b6a83-9133-4e65-9d6d-878d08e0de06 1 25 0 wz--n- <50,00t <44,86t
> 0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8 1 50 0 wz--n- <20,00t 4,57t
> 1831603c-e583-412a-b20e-f97b31ad9a55 1 122 0 wz--n- <25,00t <6,79t
> 3ff15d64-a716-4fad-94f0-abb69b5643a7 1 64 0 wz--n- <17,31t <4,09t
> 424fc43f-6bbf-47bb-94a0-b4c3322a4a90 1 68 0 wz--n- <14,46t <1,83t
> 4752cc9d-5f19-4cb1-b116-a62e3ee05783 1 81 0 wz--n- <28,00t <4,91t
> 567a63ec-5b34-425c-af20-5997450cf061 1 110 0 wz--n- <17,00t <2,21t
> 5f6dcc41-9a2f-432f-9de0-bed541cd6a03 1 71 0 wz--n- <20,00t <2,35t
> 8a4e4463-0945-430e-affd-c7ac2bbdc912 1 86 0 wz--n- <13,01t 2,85t
> c9543c8d-c6da-44be-8060-179e807f1211 1 55 0 wz--n- <18,00t 5,22t
> d5679d9d-ebf2-41ef-9e93-83d2cd9b027c 1 67 0 wz--n- <7,20t <1,15t
No, this is not good - these are VGs on shared storage, and the host
should not be able to access them.
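A quick way to confirm this is to map each VG to the devices backing it (a rough sketch; the device names in the comments are only placeholders):

    # show which PV (device) each VG sits on
    pvs -o pv_name,vg_name
    # the UUID-named storage domain VGs will show up on the FC SAN LUNs
    # (e.g. /dev/mapper/3600...), while onn should sit on a local disk

If the UUID-named VGs are backed by the SAN multipath devices, the host's LVM is scanning shared storage, which is exactly the problem.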
> onn 1 11 0 wz--n- 277,46g 54,60g
Is this a guest VG (created inside the guest)? If so, this is bad.
> Others:
> VG #PV #LV #SN Attr VSize VFree
> 003b6a83-9133-4e65-9d6d-878d08e0de06 1 25 0 wz--n- <50,00t <44,86t
> 0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8 1 50 0 wz--n- <20,00t 4,57t
> 1831603c-e583-412a-b20e-f97b31ad9a55 1 122 0 wz--n- <25,00t <6,79t
> 3ff15d64-a716-4fad-94f0-abb69b5643a7 1 64 0 wz--n- <17,31t <4,09t
> 424fc43f-6bbf-47bb-94a0-b4c3322a4a90 1 68 0 wz--n- <14,46t <1,83t
> 4752cc9d-5f19-4cb1-b116-a62e3ee05783 1 81 0 wz--n- <28,00t <4,91t
> 567a63ec-5b34-425c-af20-5997450cf061 1 110 0 wz--n- <17,00t <2,21t
> 5f6dcc41-9a2f-432f-9de0-bed541cd6a03 1 71 0 wz--n- <20,00t <2,35t
> 8a4e4463-0945-430e-affd-c7ac2bbdc912 1 86 0 wz--n- <13,01t 2,85t
> c9543c8d-c6da-44be-8060-179e807f1211 1 55 0 wz--n- <18,00t 5,22t
> d5679d9d-ebf2-41ef-9e93-83d2cd9b027c 1 67 0 wz--n- <7,20t <1,15t
Again, bad.
> onn 1 11 0 wz--n- 277,46g 54,60g
> vg_okosvaros 2 7 0 wz-pn- <77,20g 0
Bad if this is a guest VG.
> Others:
> VG #PV #LV #SN Attr VSize VFree
> 003b6a83-9133-4e65-9d6d-878d08e0de06 1 25 0 wz--n- <50,00t <44,86t
> 0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8 1 50 0 wz--n- <20,00t 4,57t
> 1831603c-e583-412a-b20e-f97b31ad9a55 1 122 0 wz--n- <25,00t <6,79t
> 3ff15d64-a716-4fad-94f0-abb69b5643a7 1 64 0 wz--n- <17,31t <4,09t
> 424fc43f-6bbf-47bb-94a0-b4c3322a4a90 1 68 0 wz--n- <14,46t <1,83t
> 4752cc9d-5f19-4cb1-b116-a62e3ee05783 1 81 0 wz--n- <28,00t <4,91t
> 567a63ec-5b34-425c-af20-5997450cf061 1 110 0 wz--n- <17,00t <2,21t
> 5f6dcc41-9a2f-432f-9de0-bed541cd6a03 1 71 0 wz--n- <20,00t <2,35t
> 8a4e4463-0945-430e-affd-c7ac2bbdc912 1 86 0 wz--n- <13,01t 2,85t
> c9543c8d-c6da-44be-8060-179e807f1211 1 55 0 wz--n- <18,00t 5,22t
> d5679d9d-ebf2-41ef-9e93-83d2cd9b027c 1 67 0 wz--n- <7,20t <1,15t
> onn 1 13 0 wz--n- <446,07g 88,39g
> vg_4trdb1p 3 7 0 wz-pn- 157,19g 0
> vg_4trdb1t 3 7 0 wz-pn- 157,19g 0
> vg_deployconfigrepo 3 7 0 wz-pn- 72,19g 0
> vg_ektrdb1p 3 7 0 wz-pn- 157,19g 0
> vg_ektrdb1t 3 7 0 wz-pn- 157,19g 0
> vg_empteszt 2 6 0 wz-pn- <77,20g <20,00g
> vg_helyiertekek 6 8 0 wz-pn- 278,11g 0
> vg_log 3 7 0 wz-pn- 347,19g <50,00g
> vg_monitor1m 3 7 0 wz-pn- 87,19g 0
> vg_monoradattarappfejlesztoi 2 6 0 wz-pn- <97,20g 0
> vg_okosvaros 2 6 0 wz-pn- <377,20g 0
Bad if these are guest VGs.
> I can see some of the VMs' VGs, but not all.
You should not see *any* of the VM VGs on the host, nor any of the oVirt VGs
(e.g. 003b6a83-9133-4e65-9d6d-878d08e0de06).
This is a known issue with LVM on older RHEL/CentOS versions: LVM scans active LVs
and accesses VGs and LVs created and owned by the guest.
This can lead to data corruption and many other issues, which is why we
recommend configuring a strict LVM filter on hypervisors.
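For illustration only (the accept pattern below is a placeholder - the real one must match the devices that actually back the host's own VG), a strict filter in /etc/lvm/lvm.conf looks something like this:

    # /etc/lvm/lvm.conf (sketch, placeholder device path)
    # accept only the host's local PVs, reject everything else
    filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-XXXXXX.*$|", "r|.*|" ]

With such a filter, LVM commands on the host ignore the SAN LUNs and any LVs created inside guests.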
I'm not sure the issue you see inside the VM is related to this, but
it is very likely.
Creating an LVM filter is not easy: you need to understand how LVM filters
work and which devices are needed by the host. LVM does not provide an easy
way to configure this, so we provide a tool to help with it.
To configure the LVM filter on a hypervisor, run:

    vdsm-tool config-lvm-filter

and follow the instructions.
See https://blogs.ovirt.org/2017/12/lvm-configuration-the-easy-way/
Note that the tool uses heuristics to find the devices needed by the
hypervisor, so it is possible that the filter will be too strict and the
host will fail to boot with it. You will have to fix the filter manually
if this happens.
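If it does happen, a possible manual fix (just a sketch; it assumes onn is the host's own VG) is to find the local devices the host really needs and allow only those:

    # list the devices backing the host's own VG
    pvs -o pv_name --select vg_name=onn
    # adjust the filter in /etc/lvm/lvm.conf to accept only those devices,
    # then rebuild the initramfs so the filter is also applied at boot
    dracut -f

After a reboot, vgs and lvs on the host should list only the onn VG and its LVs.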
We plan to integrate this into the host deploy/upgrade flow so it will be
configured automatically. I hope it will be available in a future 4.4 version.
Nir
> I checked the disk connections of the "problematic" and the "good" VMs (looking for a storage configuration mistake), but every storage domain that is connected to a "wrong" VM is also connected to a "good" VM.
> I tried to power the VMs off and on again; one of them became "good", the others did not.
> Every "wrong" VM was made from the same template, but another 50 "good" VMs were made from that template too.
>
> csabany
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/47VU2TERUJK4ZO6OVZLMKO5VQNBWV4IA/