<div dir="ltr"><p dir="ltr"><br>
On 23/07/2014 1:45 am, "Jason Brooks" <<a href="mailto:jbrooks@redhat.com" target="_blank">jbrooks@redhat.com</a>> wrote:<br>
><br>
><br>
><br>
> ----- Original Message -----<br>
> > From: "Jason Brooks" <<a href="mailto:jbrooks@redhat.com" target="_blank">jbrooks@redhat.com</a>><br>
> > To: "Andrew Lau" <<a href="mailto:andrew@andrewklau.com" target="_blank">andrew@andrewklau.com</a>><br>
> > Cc: "users" <<a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a>><br>
> > Sent: Tuesday, July 22, 2014 8:29:46 AM<br>
> > Subject: Re: [ovirt-users] Can we debug some truths/myths/facts about hosted-engine and gluster?<br>
> ><br>
> ><br>
> ><br>
> > ----- Original Message -----<br>
> > > From: "Andrew Lau" <<a href="mailto:andrew@andrewklau.com" target="_blank">andrew@andrewklau.com</a>><br>
> > > To: "users" <<a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a>><br>
> > > Sent: Friday, July 18, 2014 4:50:31 AM<br>
> > > Subject: [ovirt-users] Can we debug some truths/myths/facts about<br>
> > > hosted-engine and gluster?<br>
> > ><br>
> > > Hi all,<br>
> > ><br>
> > > As most of you have got hints from previous messages, hosted engine won't<br>
> > > work on gluster . A quote from BZ1097639<br>
> > ><br>
> > > "Using hosted engine with Gluster backed storage is currently something we<br>
> > > really warn against.<br>
> ><br>
> > My current setup is hosted engine, configured w/ gluster storage as described<br>
> > in my<br>
> > blog post, but with three hosts and replica 3 volumes.<br>
> ><br>
> > Only issue I've seen is an errant message about the Hosted Engine being down<br>
> > following an engine migration. The engine does migrate successfully, though.

That was fixed in 3.4.3, I believe, although when it happened to me the engine didn't migrate; it just sat there.
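(For anyone following along, a rough sketch of how you can check this; hosted-engine --vm-status is the standard HA tool, the log path may differ between versions:)

    # ask the HA agents on each host what state they think the engine VM is in
    hosted-engine --vm-status

    # watch the HA agent log on the host that should pick the engine up
    tail -f /var/log/ovirt-hosted-engine-ha/agent.log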
> >
> > RE your bug, what do you use for a mount point for the nfs storage?
>
> In the log you attached to your bug, it looks like you're using localhost as
> the nfs mount point. I use a dns name that resolves to the virtual IP hosted
> by ctdb. So, you're only ever talking to one nfs server at a time, and failover
> between the nfs hosts is handled by ctdb.
<p dir="ltr">I also tried your setup, but hit other complications. I used localhost </p><div class="gmail_default" style="font-family:tahoma,sans-serif;display:inline">in an old setup, </div>previously as I was under the assumption when accessing anything gluster related,<div class="gmail_default" style="font-family:tahoma,sans-serif;display:inline">
the connection point only provides the volume info and you connect to any server in the volume group.</div> <p></p>
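To illustrate the assumption I mean (host and volume names below are just examples, and the exact backup-volfile option name varies between glusterfs versions):

    # native glusterfs mount: "gluster1" is only used to fetch the volume
    # info; after that the client talks to all of the bricks directly
    mount -t glusterfs gluster1:/engine /mnt/engine

    # fallback volfile servers can also be listed on the mount
    mount -t glusterfs -o backup-volfile-servers=gluster2:gluster3 \
        gluster1:/engine /mnt/engine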
<p dir="ltr">><br>
> Anyway, like I said, my main testing rig is now using this configuration,<br>
> help me try and break it. :)</p>
<p dir="ltr">rm -rf /</p>
<p dir="ltr">Jokes aside, are you able to reboot a server without losing the VM ?</p><div class="gmail_default" style="font-family:tahoma,sans-serif;display:inline"> My experience with ctdb (based on your blog) was even with the "floating/virtual IP" it wasn't fast enough, or something in the gluster layer delayed the failover. Either way, the VM goes into paused state and can't be resumed.</div>
<p dir="ltr">><br>
> ><br>
> > Jason<br>
> ><br>
> ><br>
> > ><br>
> > ><br>
> > > I think this bug should be closed or re-targeted at documentation,<br>
> > > because there is nothing we can do here. Hosted engine assumes that<br>
> > > all writes are atomic and (immediately) available for all hosts in the<br>
> > > cluster. Gluster violates those assumptions.<br>
> > ><br>
> > > "<br>
> > ><br>
> > > Until the documentation gets updated, I hope this serves as a useful
> > > notice, at least to save people some of the headaches I hit, like
> > > hosted-engine starting up multiple VMs because of the above issue.
> > >
> > >
> > > Now my question: does this theory prevent a scenario of, say, a gluster
> > > replicated volume being mounted as a glusterfs filesystem and then
> > > re-exported as a native kernel NFS share for the hosted-engine to
> > > consume? It could then be possible to chuck ctdb in there to provide a
> > > last-resort failover solution. I have tried it myself and suggested it to
> > > two people who are running a similar setup. They are now using the native
> > > kernel NFS server for hosted-engine and haven't reported as many issues.
> > > Curious, could anyone validate my theory on this?
> > >
> > > Thanks,
> > > Andrew
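To be concrete about the re-export idea above, the setup I'm describing is roughly this (a sketch only; the volume name, paths and floating name are made up, and the exports options are just what I'd expect to need when exporting a FUSE mount):

    # on each storage node: gluster's built-in nfs turned off on the volume
    gluster volume set engine nfs.disable on

    # mount the replicated volume locally with the native glusterfs client
    mount -t glusterfs localhost:/engine /gluster/engine

    # /etc/exports -- re-export that mount through the kernel nfs server
    # (an explicit fsid is needed when exporting a FUSE mount)
    /gluster/engine  *(rw,no_root_squash,fsid=1)

    # hosted-engine then mounts the share via the ctdb floating name,
    # e.g. ovirt-store:/gluster/engine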