Re: Ovirt 4.2.7 won't start and drops to emergency console

What happens when it complains that there are no VGs? When you run 'vgs', what is the output? Also, take a look at https://www.redhat.com/archives/linux-lvm/2016-February/msg00012.html . I have the feeling that you need to deactivate all LVs - not only the thin pool, but also the thin LVs (first); a sketch of that follows below the quoted message.

Best Regards,
Strahil Nikolov

On Sep 29, 2019 23:00, jeremy_tourville@hotmail.com wrote:
Yes, I can take the downtime. Actually, I don't have any choice at the moment because it is a single node setup. :) From the research I have done, I think this is a distributed volume. I posted the lvchange command in my last post; this was the result. I ran:

lvchange -an /dev/gluster_vg1/lv_datadisks

When I do this I get the message "Volume group "gluster_vg1" not found. Cannot process volume group gluster_vg1." I also tried the command the way you specified, with just the LV, and got the same result.
I had placed the system in global maintenance mode prior to the reboot. Upon reboot I got messages that the various Gluster volumes could not be mounted because of timeouts, which is what started my original post. I think we are both thinking along the same lines regarding the issue. The question is how you fix a volume that the system won't mount. It does seem likely that the thin pool needs to be repaired, but what do you do if you can't even perform the first step of the procedure?
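A minimal sketch of that deactivation order, assuming the names mentioned in this thread (VG gluster_vg1, thin LV lv_datadisks, pool lvthinpool; adjust to the actual layout):

# Deactivate each thin LV first, then the thin pool itself
lvchange -an gluster_vg1/lv_datadisks
lvchange -an gluster_vg1/lvthinpool

# Confirm nothing in the VG is still active
lvs -o lv_name,lv_attr gluster_vg1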

vgs displays everything EXCEPT gluster_vg1, and "dmsetup ls" does not list anything from the VG in question. That is why I couldn't run the lvchange command - the LVs were not active or even detected by the system.

OK, I found my problem, and a solution: https://access.redhat.com/solutions/3251681

# cd /var/log
# grep -ri gluster_vg1-lvthinpool-tpool .

My metadata is 100% full! Now, how do I find out how big my original metadata size was so I can make the new one the correct size?
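A rough sketch of how the existing metadata size could be checked and the pool repaired, assuming the PVs become visible again after a rescan and that the pool is gluster_vg1/lvthinpool (per the grep pattern above); treat this as an outline, not the exact procedure from the linked solution:

# Rescan devices and try to activate the VG (activation may still fail
# while the pool metadata is completely full)
pvscan --cache
vgscan
vgchange -ay gluster_vg1

# Show data/metadata usage and the current size of the hidden metadata LV
# ([lvthinpool_tmeta]) - that size is the "original metadata size"
lvs -a -o lv_name,lv_size,data_percent,metadata_percent,lv_metadata_size gluster_vg1

# If the VG has free extents, the metadata LV can be grown in place
# (the +1G here is only an example size)
lvextend --poolmetadatasize +1G gluster_vg1/lvthinpool

# Alternatively, repair the pool onto fresh metadata (this also needs
# free space in the VG)
lvconvert --repair gluster_vg1/lvthinpool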