Can you afford downtime?
You can try something like this (I'm improvising):
Set global maintenance (either via the UI or with hosted-engine --set-maintenance
--mode=global).
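For example, from the host (the status check is just to confirm maintenance is really on):
# put the whole setup into global maintenance so the HA agents don't restart the engine
hosted-engine --set-maintenance --mode=global
# confirm the maintenance state
hosted-engine --vm-status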
Stop the engine.
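With a hosted engine that usually means shutting the engine VM down, e.g.:
hosted-engine --vm-shutdown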
Stop the ovirt-ha-agent, ovirt-ha-broker, vdsmd, supervdsmd, sanlock and glusterd services.
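For example:
# stop the oVirt HA, virt and gluster services on the host
systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd sanlock glusterd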
Stop all remaining gluster processes via the kill script in /usr/share/gluster*..../something-named-kill.
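On most installations that script is something like the following (the exact path and name can differ per version, so double-check on your system):
# kills any leftover glusterfsd/glusterfs processes
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh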
Disable all LVs in the VG via:
lvchange -an VG/LV
If it doesn't work try disabling the VG:
vgchange -an VG and then enabling it again:
vgchange -ay VG
Next, try to repair the thin pool via:
lvconvert --repair VG/thinpool
Finally, disable the VG again and reboot the system to verify everything works.
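Using the names from your gdeploy file below, the LVM part would look roughly like this (just a sketch; I'm guessing the thin pool name, so take the real LV/pool names from 'lvs -a gluster_vg1'):
lvchange -an gluster_vg1/lv_datadisks    # deactivate the thin LVs under the pool
lvchange -an gluster_vg1/lv_vmdisks
lvchange -an gluster_vg1/engine_lv
vgchange -an gluster_vg1                 # if single LVs refuse, deactivate the whole VG
vgchange -ay gluster_vg1                 # ...and try activating it again
# if activation still fails, repair the thin pool (it must be inactive for --repair)
lvconvert --repair gluster_vg1/<thinpool>
vgchange -an gluster_vg1                 # deactivate once more, then reboot to verify
reboot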
Best Regards,
Strahil Nikolov

On Sep 29, 2019 14:39, jeremy_tourville(a)hotmail.com wrote:
Thank you for the reply. Please pardon my ignorance, I'm not very good with
GlusterFS. I don't think this is a replicated volume (though I could be wrong); I
built a single-node hyperconverged hypervisor. I was reviewing my gdeploy file from when
I originally built the system. I have the following values:
PV  = /dev/sda
VG1 = gluster_vg1
LV1 = engine_lv (thick)
LV2 = gluster_vg1 thinpool
LV3 = lv_vmdisks (thinpool)
LV4 = lv_datadisks (thinpool)
So according to the article I read in my OP, the first step is to deactivate the volumes
under the thin pool. I ran the command lvchange -an /dev/gluster_vg1/lv_datadisks.
When I do this I am told: Volume group "gluster_vg1" not found. Cannot process volume
group gluster_vg1. That seems consistent with the timeout error message. How do you fix
this if you can't access the volumes? Thoughts?
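A quick sanity check (nothing oVirt-specific, just the standard LVM listing commands) to see whether LVM can see the device and the VG at all:
pvs                  # is /dev/sda still listed as a physical volume?
vgs                  # does gluster_vg1 show up at all?
lvs -a gluster_vg1   # if it does, list every LV including the thin pool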