[ovirt-users] ovirt+gluster+NFS : storage hicups
Nicolas Ecarnot
nicolas at ecarnot.net
Thu Aug 6 21:18:52 UTC 2015
Hi Tim,
Nice to read that someone else is fighting with a similar setup :)
On 06/08/2015 16:36, Tim Macy wrote:
> Nicolas, I have the same setup dedicated physical system running engine
> on CentOS 6.6 three hosts running CentOS 7.1 with Gluster and KVM, and
> firewall is disabled on all hosts. I also followed the same documents
> to build my environment so I assume they are very similar. I have on
> occasion had the same errors and have also found that "ctdb rebalanceip
> <floating ip>" is the only way to resolve the problem.
Indeed, when I stop/resume my ctdb services, the main effect is a move
of the vIP.
So we agree there is definitely something to dig into there, either
directly or as a side effect.
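For the record, the workaround Tim describes can be sketched as below.
This is a hedged sketch: `<floating ip>` is left as a placeholder for
your own vIP, and the surrounding commands are just the usual CTDB
inspection steps, not something prescribed by the thread.

```shell
# Check overall CTDB cluster health first.
ctdb status

# Show which node (PNN) currently holds each public/floating IP.
ctdb ip

# Tim's workaround: force the floating vIP to be rebalanced across
# the nodes, which moves it off the (possibly wedged) current holder.
ctdb rebalanceip <floating ip>
```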
I must admit I'd be glad to investigate further before following the
second part of your answer.
> I intend to
> remove ctdb since it is not needed with the configuration we are
> running. CTDB is only needed for hosted engine on a floating NFS mount,
And, less obviously, it also allows one to gracefully remove a host
from the vIP managers pool before removing it at the gluster layer.
Not a great advantage, but worth mentioning.
> so you should be able change the gluster storage domain mount paths to
> "localhost:<name>". The only thing that has prevented me from making
> this change is that my environment is live with running VM's. Please
> let me know if you go this route.
I'm more than interested in going this way, if:
- I find no time to investigate the floating vIP issue
- I can simplify this setup
- It can lead to increased performance
About the master storage domain path, should I use only pure gluster and
completely forget about NFS?
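To make the alternative concrete, here is a hedged sketch of what
Tim's `localhost:<name>` suggestion amounts to, assuming a hypothetical
gluster volume named `data` and an arbitrary mount point -- the exact
paths oVirt uses for its storage domains will differ:

```shell
# Instead of mounting an NFS export behind the CTDB-managed floating
# vIP, each host mounts the volume through its own local glusterd,
# using the gluster native (FUSE) client.
mount -t glusterfs localhost:/data /mnt/data
```

In oVirt terms this would correspond to a GlusterFS-type storage domain
whose path is `localhost:/data`, so each host talks to its local brick
daemon and the floating NFS vIP (and CTDB) is no longer in the data path.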
--
Nicolas ECARNOT