<div dir="ltr"><div class="gmail_default" style="font-family:tahoma,sans-serif">Hi,</div><div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif">
keepalived is only used for the initial grab of the gluster volume info (e.g. which servers host the bricks); from what I've noticed, your clients will then connect to the gluster servers directly (keepalived is no longer involved).</div>
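<div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif">As an aside, the FUSE client can also be pointed at a fallback server at mount time, which covers that initial volume-info fetch without keepalived. A rough sketch (the volume name "datastore" and the host names are placeholders; on older glusterfs versions the option is spelled backupvolfile-server):</div>
<div class="gmail_default" style="font-family:tahoma,sans-serif"><pre># The client fetches the volume info from host1; if host1 is down at mount
# time it falls back to host2. After mounting, I/O goes to the bricks directly.
mount -t glusterfs -o backup-volfile-servers=host2 host1:/datastore /mnt/datastore</pre></div>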
<div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif">If you disable quorum then you won't have the "read only" issue when you lose a host, but you also won't have protection from split brain (e.g. if your two hosts lose network connectivity to each other, VMs will keep writing on both hosts). Since you have the gluster server and client on the same host, this is inevitable. </div>
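<div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif">For reference, disabling quorum on the volume can be sketched with the options below ("datastore" is a placeholder volume name; this deliberately trades split-brain protection for availability, so use with care):</div>
<div class="gmail_default" style="font-family:tahoma,sans-serif"><pre># Client-side quorum: keep accepting writes even when only one brick is reachable.
gluster volume set datastore cluster.quorum-type none
# Server-side quorum: don't stop bricks when the trusted pool loses quorum.
gluster volume set datastore cluster.server-quorum-type none</pre></div>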
<div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif">Hopefully someone else with a little more knowledge can chime in. I've got a setup very similar to yours, so what I write here is just my own findings.</div>
<div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif">But yes, you should be able to get real HA if you accept some trade-offs (disabling quorum). So far mine hasn't had any issues.</div>
<div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif">HTH</div><div class="gmail_default" style="font-family:tahoma,sans-serif">Andrew.</div>
<div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Dec 21, 2013 at 1:06 AM, <span dir="ltr"><<a href="mailto:gregoire.leroy@retenodus.net" target="_blank">gregoire.leroy@retenodus.net</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hi,<br>
<br>
There are some things I don't understand. First of all, why do we need keepalived? I thought that it would be transparent at this layer and that glusterfs would handle all the replication by itself. Is that because I use POSIXFS instead of GlusterFS, or is it totally unrelated?<br>
<br>
Secondly, about the split-brain: when you say that I can read but not write, does that mean I can't write data to the VM storage space, or that I can't create VMs? If I can't write data, what would be the workaround? Am I forced to have 3 (or 4, I guess, as I want replication) nodes?<br>
<br>
To conclude: can I get real HA (except for the engine) with Ovirt / Glusterfs and 2 nodes?<br>
<br>
Thank you very much,<br>
Regards,<br>
Grégoire Leroy<br>
<br>
<br>
Le 2013-12-19 23:03, Andrew Lau a écrit :<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div>
Hi,<br>
<br>
What I've learned about the way glusterfs works is that you specify the host<br>
only to grab the initial volume information; the client then goes directly to<br>
the other hosts to connect to the datastore - this avoids the<br>
bottleneck issue that NFS has.<br>
<br>
Knowing this, the workaround I used was to set up keepalived on the<br>
gluster hosts (make sure you set it up on an interface other than your<br>
ovirtmgmt or you'll clash with the live migration components). So now<br>
if one of my hosts drops from the cluster, the storage access is not<br>
lost. I haven't fully tested the whole infrastructure yet, but my only<br>
fear is that VMs may drop into "PAUSE" mode during the keepalived<br>
transition period.<br>
<br>
Also - you may need to change your glusterfs ports so they don't<br>
interfere with vdsm. My post here is a little outdated, but it still<br>
has my findings on keepalived<br>
etc. <a href="http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/" target="_blank">http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/</a><br></div>
[2]<div><div><br>
<br>
The other thing to note is that you've only got two gluster hosts. I<br>
believe by default ovirt now sets the quorum setting, which enforces<br>
that there must be at least 2 nodes alive in your configuration. This<br>
means when there is only 1 gluster server up, you'll be able to read<br>
but not write; this is to avoid split-brain.<br>
<br>
Thanks,<br>
Andrew<br>
<br>
On Thu, Dec 19, 2013 at 5:12 AM, <<a href="mailto:gregoire.leroy@retenodus.net" target="_blank">gregoire.leroy@retenodus.net</a>> wrote:<br>
<br>
</div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div><div>
Hello,<br>
<br>
As I said in a previous email, I have this configuration with Ovirt<br>
3.3:<br>
1 Ovirt Engine<br>
2 Hosts Centos 6.5<br>
<br>
I successfully set up GlusterFS. I created a distributed replicated<br>
volume with 2 bricks: host1:/gluster and host2:/gluster.<br>
<br>
Then, I created a POSIXFS storage domain storage_gluster with the<br>
option glusterfs, and I gave it the path "host1:/gluster".<br>
<br>
First, I'm rather surprised I have to specify a host for the<br>
storage, as I wish to have a distributed replicated storage. I<br>
expected to specify both hosts.<br>
<br>
Then I created a VM on this storage. The expected behaviour if I<br>
shut down host1 should be that my VM keeps running on the second<br>
brick. Yet, not only do I lose my VM, but host2 goes into a<br>
non-operational status because one of its data storage domains is not reachable.<br>
<br>
Did I miss something in the configuration? How could I get the<br>
desired behaviour?<br>
<br>
Thanks a lot,<br>
Regards,<br>
Grégoire Leroy<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
</div></div><a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a> [1]<br>
</blockquote>
<br>
<br>
<br>
Links:<br>
------<br>
[1] <a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
[2] <a href="http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/" target="_blank">http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/</a><br>
</blockquote>
</blockquote></div><br></div></div>