<div dir="ltr">I have the same setup, and my only issue is at the switch level with CTDB. The IP does fail over; however, until I issue a ping from the interface CTDB is connected to, the storage will not connect. <div><br></div><div>If I go to the host with the CTDB vIP and issue a ping from the interface CTDB is on, everything works as described. </div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 6, 2015 at 5:18 PM, Nicolas Ecarnot <span dir="ltr"><<a href="mailto:nicolas@ecarnot.net" target="_blank">nicolas@ecarnot.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Tim,<br>
<br>
Nice to read that someone else is fighting with a similar setup :)<span class=""><br>
<br>
Le 06/08/2015 16:36, Tim Macy a écrit :<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Nicolas, I have the same setup dedicated physical system running engine<br>
on CentOS 6.6 three hosts running CentOS 7.1 with Gluster and KVM, and<br>
firewall is disabled on all hosts. I also followed the same documents<br>
to build my environment so I assume they are very similar. I have on<br>
occasion had the same errors and have also found that "ctdb rebalanceip<br>
<floating ip>" is the only way to resolve the problem.<br>
</blockquote>
<br></span>
Indeed, when I stop/continue my ctdb services, the main visible action is a move of the vIP.<br>
So we agree there is definitely something to dig into there, either directly or as a side effect.<br>
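This looks like a stale ARP entry on the switch: after the vIP moves, the switch keeps forwarding traffic for that IP to the old host's MAC until the new host sends a frame (which is why a ping from the vIP interface unblocks it). CTDB normally announces a takeover with gratuitous ARP; as a hypothetical diagnostic, one could trigger the announcement manually from the node now holding the vIP. The interface name and address below are placeholders:

```shell
# On the node that currently holds the CTDB vIP.
# Check which node/interface holds the public IP:
ctdb ip

# Send unsolicited (gratuitous) ARP replies so the switch
# relearns the vIP -> MAC mapping (adjust interface and IP):
arping -U -I eth0 -c 3 192.0.2.100
```

If this clears the hang without a ping, the switch's ARP/MAC table is the culprit, and the fix belongs on the network side (or in CTDB's takeover event scripts).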
<br>
I must admit I'd like to investigate further before following the second part of your answer.<span class=""><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I intend to<br>
remove ctdb since it is not needed with the configuration we are<br>
running. CTDB is only needed for hosted engine on a floating NFS mount,<br>
</blockquote>
<br></span>
And in a less obvious manner, it also lets you gracefully remove a host from the vIP managers' pool before removing it at the gluster layer.<br>
Not a great advantage, but worth mentioning.<span class=""><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
so you should be able to change the gluster storage domain mount paths to<br>
"localhost:<name>". The only thing that has prevented me from making<br>
this change is that my environment is live with running VMs. Please<br>
let me know if you go this route.<br>
</blockquote>
<br></span>
I'm more than interested in taking this route if:<br>
- I find no time to investigate the floating vIP issue<br>
- it can simplify this setup<br>
- it can lead to better performance<br>
<br>
About the master storage domain path, should I use only pure gluster and completely forget about NFS?<div class="HOEnZb"><div class="h5"><br>
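For reference, here is a sketch of the two mount styles being discussed (volume name "datavol" and host names are placeholders): an NFS-style domain mounts through the floating vIP that CTDB protects, whereas a native GlusterFS domain can mount via localhost, removing the need for the vIP entirely:

```shell
# NFS path through the CTDB-managed vIP (this is what ctdb exists for):
mount -t nfs 192.0.2.100:/datavol /mnt/data

# Native GlusterFS mount via localhost; the client learns all bricks
# from the volfile, and backup-volfile-servers covers the case where
# the local glusterd is down at mount time:
mount -t glusterfs localhost:/datavol /mnt/data \
  -o backup-volfile-servers=host2:host3
```

In oVirt terms this would correspond to a storage domain of type GlusterFS with path localhost:/datavol instead of an NFS domain pointing at the vIP.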
<br>
-- <br>
Nicolas ECARNOT<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr">Donny Davis<br><br></div></div>
</div>