
Hi,
Our production oVirt system looks like this: a standalone management server (version 4.3.9), 6 clusters, 28 nodes (v4.2, v4.3), one storage domain (FC SAN storages), CentOS 7 VMs and some Windows VMs. I have a recurring problem. Sometimes when I power off a VM and power it on again, I get an error message on our Linux VMs (when they use LVM, of course): "dracut: Read-only locking type set. Write locks are prohibited." and "dracut: Can't get lock for vg". I can repair only 70% of the damaged VMs. I tried to localize the problem, but I can't. The error has occurred randomly on every cluster and every storage over the last 2 years. Has anyone ever encountered such a problem?

On April 25, 2020 11:07:23 PM GMT+03:00, csabany@freemail.hu wrote:
-[snip]
I haven't seen such an issue so far, but I can only recommend that you clone such a VM next time, so you can try to figure out what is going on. During the repair, have you tried rebuilding the initramfs after the issue happens?
Best Regards,
Strahil Nikolov

-------- Original message --------
From: Strahil Nikolov <hunter86_bg@yahoo.com>
Date: 26 April 2020 07:57:43
Subject: Re: [ovirt-users] Ovirt vs lvm?
To: csabany@freemail.hu

-[snip]
Thanks for the advice! Definitely yes: I build a new initramfs with dracut. When this error occurs, the locking_type parameter in the dracut lvm.conf file has changed to 4. I work around it with: lvm vgchange -ay --config 'global {locking_type=1}', then write locking_type back to 1 in /etc/lvm/lvm.conf. Then I exit and (if I am having a lucky day) dracut -v -f builds a new initramfs and the VM works fine. The issue appears in pairs: whenever I find a VM with this "error", one of the other running VMs has the error too.
csabany
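For reference, a minimal sketch of that recovery sequence, assuming the guest drops into the dracut emergency shell and uses the stock CentOS 7 lvm.conf layout (the sed line is only an illustration; check the edited file by hand afterwards):

    # from the dracut emergency shell: activate the VG despite the
    # "read-only locking type" error by forcing locking_type=1
    lvm vgchange -ay --config 'global {locking_type=1}'

    # once the guest is up again: make locking_type=1 persistent and
    # rebuild the initramfs so the embedded lvm.conf is regenerated
    sed -i 's/^\([[:space:]]*locking_type[[:space:]]*=\).*/\1 1/' /etc/lvm/lvm.conf
    dracut -v -f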

On Sat, Apr 25, 2020 at 10:08 PM <csabany@freemail.hu> wrote:
-[snip]
I think one possible reason could be the hypervisor not correctly masking LVM at the VM disk level. There was a bug about this in the past. Is this a fresh install, or did it arrive from previous versions? In any case, verify the output of the command "vgs" on all your hypervisors and make sure that you only see volume groups related to the hypervisors themselves, not ones from inner VMs. If you have a subset of VMs with the problem, identify whether it happens only on particular clusters/hosts, so that you can narrow the analysis to those hypervisors.
HIH,
Gianluca
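As a rough illustration of that check (the host names are placeholders for your own nodes), something like this run from an admin machine lists the VGs each hypervisor currently sees:

    # on an FC-only oVirt Node host you would expect only the host's own
    # VG (e.g. "onn") to appear here
    for h in node01 node02 node03; do
        echo "== $h =="
        ssh root@"$h" vgs --noheadings -o vg_name
    done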

-------- Original message --------
From: Gianluca Cecchi <gianluca.cecchi@gmail.com>
Date: 26 April 2020 10:01:27
Subject: Re: [ovirt-users] Ovirt vs lvm?
To: csabany@freemail.hu

-[snip]

Thanks for the advice. The hypervisors are "fresh", but the management server arrived from version 3.6 step by step (we have been using this oVirt setup since 2015). The issue has occurred on different clusters, hosts and different hypervisor versions. For example, the last but one affected VM was on an IBM x3650 host running ovirt-node v4.2, and the last one on a Lenovo host running ovirt-node v4.3.
Best,
csabany

On Sun, Apr 26, 2020 at 11:06 AM Nyika Csaba <csabany@freemail.hu> wrote:
-[snip]
In theory, on a hypervisor node the only VG listed should be something like onn (as in oVirt Node Next generation, I think). In my case I also have gluster volumes, but in your case with an FC SAN you should only have onn:

[root@ovirt ~]# vgs
  VG                 #PV #LV #SN Attr   VSize    VFree
  gluster_vg_4t        1   2   0 wz--n-   <3.64t       0
  gluster_vg_4t2       1   2   0 wz--n-   <3.64t       0
  gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g       0
  gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g       0
  onn                  1  11   0 wz--n- <228.40g <43.87g
[root@ovirt ~]#

And the command "lvs" should likewise show only onn-related logical volumes...

Gianluca

-------- Original message --------
From: Gianluca Cecchi <gianluca.cecchi@gmail.com>
Date: 26 April 2020 11:42:40
Subject: Re: [ovirt-users] Re: Ovirt vs lvm?
To: Nyika Csaba <csabany@freemail.hu>

-[snip]

Hi,
I checked all nodes, and what I got back from the vgs command is literally "unbelievable".

Some hosts look good:

  VG                                   #PV #LV #SN Attr   VSize    VFree
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n-  <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n-  <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n-  <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n-  <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n-  <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n-  <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n-  <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n-  <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n-  <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n-  <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-   <7,20t  <1,15t
  onn                                    1  11   0 wz--n-  277,46g  54,60g

Others:

  VG                                   #PV #LV #SN Attr   VSize    VFree
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n-  <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n-  <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n-  <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n-  <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n-  <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n-  <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n-  <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n-  <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n-  <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n-  <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-   <7,20t  <1,15t
  onn                                    1  11   0 wz--n-  277,46g  54,60g
  vg_okosvaros                           2   7   0 wz-pn-  <77,20g       0

Others:

  VG                                   #PV #LV #SN Attr   VSize    VFree
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n-  <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n-  <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n-  <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n-  <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n-  <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n-  <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n-  <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n-  <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n-  <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n-  <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-   <7,20t  <1,15t
  onn                                    1  13   0 wz--n- <446,07g  88,39g
  vg_4trdb1p                             3   7   0 wz-pn-  157,19g       0
  vg_4trdb1t                             3   7   0 wz-pn-  157,19g       0
  vg_deployconfigrepo                    3   7   0 wz-pn-   72,19g       0
  vg_ektrdb1p                            3   7   0 wz-pn-  157,19g       0
  vg_ektrdb1t                            3   7   0 wz-pn-  157,19g       0
  vg_empteszt                            2   6   0 wz-pn-  <77,20g <20,00g
  vg_helyiertekek                        6   8   0 wz-pn-  278,11g       0
  vg_log                                 3   7   0 wz-pn-  347,19g <50,00g
  vg_monitor1m                           3   7   0 wz-pn-   87,19g       0
  vg_monoradattarappfejlesztoi           2   6   0 wz-pn-  <97,20g       0
  vg_okosvaros                           2   6   0 wz-pn- <377,20g       0

I can see some of the VMs' VGs, but not all. I checked the disk connections of the "problematic" and the "good" VMs (looking for a storage configuration mistake), but every storage that is connected to a "wrong" VM is connected to "good" VMs too. I tried to power the VMs off and on again; one of them became "good", the others didn't. Every "wrong" VM was made from the same template, but another 50 "good" VMs were made from that template too.

csabany
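One way to narrow down which hosts are affected (a sketch that assumes oVirt Node hosts where onn is the only VG the host itself should report, per Gianluca's note above; extend the pattern if your hosts have other local VGs):

    # run on each hypervisor: print every VG name other than onn;
    # storage-domain UUIDs or guest vg_* names showing up here mean the
    # host is scanning LVs it should not touch
    vgs --noheadings -o vg_name | grep -Ev '^[[:space:]]*onn[[:space:]]*$'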

On Sun, Apr 26, 2020 at 2:00 PM Nyika Csaba <csabany@freemail.hu> wrote:
-[snip]
Hi,
I checked all nodes, and what I got back from the vgs command is literally "unbelievable".
Ok, so this is your problem. The main bugzilla, opened by the great Germano from Red Hat support back in the RHV 3.6 days when I first opened a case on it, is this one: https://bugzilla.redhat.com/show_bug.cgi?id=1374545

If I remember correctly, you will see the problem only if, inside the VM, you configured a PV on the whole virtual disk (and not on its partitions) and the VM's disk was configured as preallocated.

I don't have the detailed steps to solve it at hand right now, but for sure you will have to modify your LVM filters, rebuild the initramfs of the nodes and reboot them, one by one. Inside the bugzilla there was a script for LVM filtering, and there is also this page for oVirt:
https://blogs.ovirt.org/2017/12/lvm-configuration-the-easy-way/

Quite new installations should prevent the problem, in my opinion, but you could be impacted by wrong configurations carried over during upgrades.

Gianluca
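For reference, a rough sketch of that sequence on one hypervisor; the filter line is purely illustrative (the real accept pattern must match the host's own boot/root devices, which is exactly what the script in the bugzilla and the tooling in the linked blog post work out for you):

    # /etc/lvm/lvm.conf -- example only: accept the host's own PV and
    # reject everything else, so the host stops scanning guest/storage LVs
    #   filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-HOSTPV$|", "r|.*|" ]

    # rebuild the initramfs so the filter is also active at early boot,
    # then reboot the host (after migrating its VMs away)
    dracut -f
    reboot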

On April 26, 2020 4:30:33 PM GMT+03:00, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
-[snip]
I wonder if you also have issues with live migration of VMs between hosts. Have you noticed anything like that so far? Best Regards, Strahil Nikolov

On Sun, Apr 26, 2020 at 3:00 PM Nyika Csaba <csabany@freemail.hu> wrote:
-[snip]
Hi,
I checked all nodes, and what I got back from the vgs command is literally "unbelievable".
Some hosts look good:

  VG                                   #PV #LV #SN Attr   VSize    VFree
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n-  <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n-  <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n-  <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n-  <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n-  <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n-  <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n-  <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n-  <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n-  <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n-  <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-   <7,20t  <1,15t
No this is not good - these are VGs on shared storage, and the host should not be able to access them.
  onn                                    1  11   0 wz--n-  277,46g  54,60g
Is this a guest VG (created inside the guest)? If so, this is bad.
Others:

  VG                                   #PV #LV #SN Attr   VSize    VFree
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n-  <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n-  <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n-  <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n-  <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n-  <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n-  <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n-  <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n-  <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n-  <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n-  <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-   <7,20t  <1,15t
Again, bad.
  onn                                    1  11   0 wz--n-  277,46g  54,60g
  vg_okosvaros                           2   7   0 wz-pn-  <77,20g       0
Bad if these are guest VGs.
Others:

  VG                                   #PV #LV #SN Attr   VSize    VFree
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n-  <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n-  <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n-  <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n-  <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n-  <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n-  <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n-  <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n-  <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n-  <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n-  <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-   <7,20t  <1,15t
  onn                                    1  13   0 wz--n- <446,07g  88,39g
  vg_4trdb1p                             3   7   0 wz-pn-  157,19g       0
  vg_4trdb1t                             3   7   0 wz-pn-  157,19g       0
  vg_deployconfigrepo                    3   7   0 wz-pn-   72,19g       0
  vg_ektrdb1p                            3   7   0 wz-pn-  157,19g       0
  vg_ektrdb1t                            3   7   0 wz-pn-  157,19g       0
  vg_empteszt                            2   6   0 wz-pn-  <77,20g <20,00g
  vg_helyiertekek                        6   8   0 wz-pn-  278,11g       0
  vg_log                                 3   7   0 wz-pn-  347,19g <50,00g
  vg_monitor1m                           3   7   0 wz-pn-   87,19g       0
  vg_monoradattarappfejlesztoi           2   6   0 wz-pn-  <97,20g       0
  vg_okosvaros                           2   6   0 wz-pn- <377,20g       0
Bad if these are guest VGs.
I can see some of the VMs' VGs, but not all.
You should not see *any* of the VM VGs on the host, and none of the oVirt VGs either (e.g. 003b6a83-9133-4e65-9d6d-878d08e0de06).

This is a known issue with LVM on older RHEL/CentOS versions. LVM scans active LVs and accesses VGs and LVs created and owned by the guests. This can lead to data corruption and many other issues, and this is why we recommend configuring a strict LVM filter on hypervisors.

I'm not sure the issue you see inside the VM is related to this, but it is very likely.

Creating an LVM filter is not easy; you need to understand how LVM filters work and which devices are needed by the host. LVM does not provide an easy way to configure this, so we provide a tool to help with it. To configure the LVM filter on a hypervisor, run:

    vdsm-tool config-lvm-filter

and follow the instructions. See https://blogs.ovirt.org/2017/12/lvm-configuration-the-easy-way/

Note that the tool uses heuristics to find the devices needed by the hypervisor, and it is possible that the filter will be too strict and booting the host will fail with it. You will have to fix it manually if this happens.

We plan to integrate this into the host deploy/upgrade flow so it will be configured automatically. I hope it will be available in a future 4.4 version.

Nir
I checked the disk connections of the "problematic" and the "good" VMs (looking for a storage configuration mistake), but every storage that is connected to a "wrong" VM is connected to "good" VMs too. I tried to power the VMs off and on again; one of them became "good", the others didn't. Every "wrong" VM was made from the same template, but another 50 "good" VMs were made from that template too.
csabany
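A minimal sketch of applying Nir's instructions host by host, combined with the initramfs rebuild Gianluca mentioned earlier (treat it as an outline to adapt, not an exact recipe):

    # put the host into maintenance in the engine first, then on the host:
    vdsm-tool config-lvm-filter      # review and accept the proposed filter
    dracut -f                        # regenerate the initramfs with the new lvm.conf
    reboot

    # afterwards, plain vgs/lvs on the host should no longer list guest
    # or storage-domain VGs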

Hi,
Thank you for your kind, detailed answers. You helped me a lot. Now I hope we can solve the problem. Special thanks to Gianluca too.
csabany

-------- Original message --------
From: Nir Soffer <nsoffer@redhat.com>
Date: 26 April 2020 17:39:36
Subject: [ovirt-users] Re: Ovirt vs lvm?
To: Nyika Csaba <csabany@freemail.hu>

-[snip]
participants (5)
- csabany@freemail.hu
- Gianluca Cecchi
- Nir Soffer
- Nyika Csaba
- Strahil Nikolov