Re: Ovirt 4.2.7 won't start and drops to emergency console

Can you suffer downtime? You can try something like this (I'm improvising):

1. Set global maintenance (either via the UI or hosted-engine --set-maintenance --mode=global).
2. Stop the engine.
3. Stop ovirt-ha-agent, ovirt-ha-broker, vdsmd, supervdsmd, sanlock and glusterd.
4. Stop all remaining gluster processes via the script in /usr/share/gluster*..../something-named-kill.
5. Deactivate all LVs in the VG: lvchange -an LV
6. If that doesn't work, try deactivating the whole VG (vgchange -an VG) and then activating it again (vgchange -ay VG).
7. Then try to repair the thin pool: lvconvert --repair VG/thinpool
8. Finally, deactivate the VG and reboot to verify everything works.

Best Regards,
Strahil Nikolov
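Putting those steps together, a rough command sketch might look like the following. This assumes the VG is called gluster_vg1 and the thin pool gluster_vg1/thinpool, as described later in the thread, and the gluster kill-script name/path is from memory and may differ on your system:

  # Enter global maintenance and stop the hosted engine
  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-shutdown

  # Stop the HA, virtualization and storage services
  systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd sanlock glusterd

  # Stop any remaining gluster processes (script path may vary by version)
  /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh

  # Deactivate the LVs, or the whole VG if individual LVs refuse
  lvchange -an gluster_vg1/lv_datadisks gluster_vg1/lv_vmdisks
  vgchange -an gluster_vg1
  vgchange -ay gluster_vg1

  # Attempt the thin-pool repair, then deactivate the VG and reboot
  lvconvert --repair gluster_vg1/thinpool
  vgchange -an gluster_vg1
  systemctl reboot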
On Sep 29, 2019 14:39, jeremy_tourville@hotmail.com wrote:

Thank you for the reply. Please pardon my ignorance, I'm not very good with GlusterFS. I don't think this is a replicated volume (though I could be wrong); I built a single-node hyperconverged hypervisor. I was reviewing my gdeploy file from when I originally built the system and have the following values:

  PV  = /dev/sda
  VG1 = gluster_vg1
  LV1 = engine_lv (thick)
  LV2 = gluster_vg1 thinpool
  LV3 = lv_vmdisks (thinpool)
  LV4 = lv_datadisks (thinpool)
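A quick, read-only way to double-check that layout (assuming the names above) is the standard LVM reporting commands:

  # List physical volumes, volume groups and all logical volumes,
  # including the thin pool and which pool each thin LV belongs to
  pvs
  vgs
  lvs -a -o lv_name,vg_name,lv_attr,lv_size,pool_lv gluster_vg1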
So, according to the article I referenced in my OP, the first step is to deactivate the volumes under the thin pool. I ran the command lvchange -an /dev/gluster_vg1/lv_datadisks and was told: Volume group "gluster_vg1" not found. Cannot process volume group gluster_vg1.
That seems consistent with the timeout error message. How do you fix this if you can't access the volumes? Thoughts?
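When LVM reports the VG as not found, one possible first step before attempting any repair is simply to check whether the PV and VG metadata are visible at all, and then to try an explicit activation. A minimal sketch, assuming the VG really is named gluster_vg1:

  # See whether LVM can find the PV and the VG metadata at all
  pvscan
  vgscan
  vgdisplay gluster_vg1

  # If the VG is now visible, try activating it explicitly
  vgchange -ay gluster_vg1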

Yes, I can take the downtime. Actually, I don't have any choice at the moment because it is a single node setup. :) From the research I have done, I think this is a distributed volume.

I posted the lvchange command in my last message; this was the result: I ran lvchange -an /dev/gluster_vg1/lv_datadisks and got the message "Volume group "gluster_vg1" not found. Cannot process volume group gluster_vg1." I also tried the command the way you specified, with just the LV, and got the same result.

I had placed the system in global maintenance mode prior to the reboot. Upon reboot I got messages about the various gluster volumes failing to mount because of timeouts, which is what started my OP. I think we are both thinking along the same lines regarding the issue. The question is: how do you fix a volume that the system won't mount? It does seem likely that the thin pool needs to be repaired, but what do you do if you can't even perform the first step of the procedure?
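If the volume group does become visible again and the thin pool itself turns out to be damaged, a minimal repair sketch (assuming the pool is gluster_vg1/thinpool and the VG has enough free space for the repair to allocate a new metadata volume) would be:

  # Make sure the pool and its thin LVs are deactivated first
  lvchange -an gluster_vg1

  # Repair the thin-pool metadata; the damaged metadata is kept
  # aside in a new LV so it can still be inspected afterwards
  lvconvert --repair gluster_vg1/thinpool

  # Reactivate and verify
  vgchange -ay gluster_vg1
  lvs -a gluster_vg1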