If you don't constantly have pending heals, the FUSE clients are using all 3
bricks.
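If you want to double-check, something like this on each node will summarize the pending heals (a minimal sketch; 'heal info summary' needs a reasonably recent Gluster release):

for vol in $(gluster volume list); do
    # Summarize entries pending heal on each brick of this volume
    gluster volume heal $vol info summary
done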
What are your storage domain mount options?
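You can see the options actually in effect on each host with:

# List the GlusterFS storage domain mounts and their active options
mount | grep glusterSD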
My guess is that if you set the host into maintenance (and mark it to stop gluster),
then start gluster again (you can also run 'gluster volume start' with the force option)
and activate the host - it will fix everything.
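A rough sketch of that sequence on the affected host (the volume names 'data' and 'engine' match the sample output below - adjust them to your setup):

# After the host is in maintenance with 'stop gluster service',
# bring glusterd back up:
systemctl start glusterd

# Force-start the volumes so any brick process that did not
# come back up is respawned:
gluster volume start data force
gluster volume start engine force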
Why do you think that ovirt1/ovirt3 use only ovirt2 for storage?
I am using the following bash function (you can add it to .bashrc and source .bashrc) to
check whether any client is disconnected. You will need to run it on all gluster nodes:
gluster-check-connection(){
    # Skip the hyperconverged shared storage volume
    VOLUMES=$(gluster volume list | grep -v gluster_shared_storage)
    for i in $VOLUMES
    do
        # Underscores in the volume name are doubled in the mount path
        VOLUME_PATH=$(echo $i | sed 's/_/__/')
        # Each FUSE mount exposes its brick connections under .meta
        grep -i connected /rhev/data-center/mnt/glusterSD/gluster1\:_${VOLUME_PATH}/.meta/graphs/active/${i}-client-*/private
    done
}
Here is a sample output (you should see something similar):
[root@ovirt1 ~]# gluster-check-connection
/rhev/data-center/mnt/glusterSD/gluster1:_data/.meta/graphs/active/data-client-0/private:connected = 1
/rhev/data-center/mnt/glusterSD/gluster1:_data/.meta/graphs/active/data-client-1/private:connected = 1
/rhev/data-center/mnt/glusterSD/gluster1:_data/.meta/graphs/active/data-client-2/private:connected = 1
/rhev/data-center/mnt/glusterSD/gluster1:_engine/.meta/graphs/active/engine-client-0/private:connected = 1
/rhev/data-center/mnt/glusterSD/gluster1:_engine/.meta/graphs/active/engine-client-1/private:connected = 1
/rhev/data-center/mnt/glusterSD/gluster1:_engine/.meta/graphs/active/engine-client-2/private:connected = 1
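A value of 1 means the FUSE client on that node has a live connection to the brick; a 0 for any client would point to exactly the kind of disconnect you are describing.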
Best Regards,
Strahil Nikolov
On 27 May 2020 17:45:36 GMT+03:00, Randall Wood <rwood(a)forcepoint.com> wrote:
>I have a three node oVirt 4.3.7 cluster that is using GlusterFS as the
>underlying storage (each oVirt node is a GlusterFS node). The nodes are
>named ovirt1, ovirt2, and ovirt3. This has been working wonderfully
>until last week when ovirt2 crashed (it is *old* hardware; this was not
>entirely unexpected).
>
>Now I have this situation: all three oVirt nodes are acting as if the
>GlusterFS volumes only exist on ovirt2. The bricks on all three nodes
>appear to be in sync.
>
>I *think* this began happening after I restarted ovirt2 (once hard, and
>once soft) and then restarted glusterd (and only glusterd) on ovirt1
>and ovirt3 after `gluster-eventsapi status` on those two nodes showed
>inconsistent results (this had been used with success before).
>
>How can I make the oVirt nodes read and write from their local bricks?