[Users] GlusterFS Distributed Replicate
a.ludas at gmail.com
Fri Dec 20 10:22:43 EST 2013
Hi,
In a 2-node cluster you can set the path to localhost:volume. If one host goes down and the SPM role switches to the remaining running host, your master domain is still accessible, so your VMs stay up and running.
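
For example, in the "New Domain" dialog this would look roughly like
this (untested sketch; "datavol" is a placeholder volume name):

    Path:       localhost:/datavol
    VFS type:   glusterfs

Since every node mounts via localhost, the mount no longer depends on
one particular peer being up to fetch the volume information.
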
Regards,
Alex
-----Original message-----
> From:gregoire.leroy at retenodus.net <gregoire.leroy at retenodus.net>
> Sent: Friday 20th December 2013 15:10
> To: Andrew Lau <andrew at andrewklau.com>
> Cc: users <users at ovirt.org>
> Subject: Re: [Users] GlusterFS Distributed Replicate
>
> Hi,
>
> There are some things I don't understand. First of all, why do we need
> keepalived? I thought it would be transparent at this layer and that
> glusterfs would manage all the replication by itself. Is that because
> I'm using POSIXFS instead of GlusterFS, or is it totally unrelated?
>
> Secondly, about the split-brain: when you say that I can read but not
> write, does that mean I can't write data on the VM storage space, or
> that I can't create VMs? If I can't write data, what would be the
> workaround? Am I forced to have 3 (or 4, I guess, as I want
> replication) nodes?
>
> To conclude: can I get real HA (except for the engine) with oVirt /
> GlusterFS with 2 nodes?
>
> Thank you very much,
> Regards,
> Grégoire Leroy
>
>
> On 2013-12-19 23:03, Andrew Lau wrote:
> > Hi,
> >
> > What I learned about the way glusterfs works is that the host you
> > specify is only used to grab the initial volume information; the
> > client then connects directly to the other hosts in the volume -
> > this avoids the bottleneck issue that NFS has.
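> >
> > One consequence is that you can also name a fallback volfile server
> > in the mount options - something like this untested sketch
> > ("datavol" is a placeholder volume name):
> >
> >     mount -t glusterfs -o backupvolfile-server=host2 \
> >         host1:/datavol /mnt/datavol
> >
> > If host1 is down at mount time, the client grabs the volume
> > information from host2 instead.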
> >
> > Knowing this, the workaround I used was to set up keepalived on the
> > gluster hosts (make sure you set it up on an interface other than
> > your ovirtmgmt one, or you'll clash with the live migration
> > components). So now if one of my hosts drops from the cluster,
> > storage access is not lost. I haven't fully tested the whole
> > infrastructure yet, but my only fear is that the VMs may drop into
> > "PAUSE" mode during the keepalived transition period.
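> >
> > For reference, a rough keepalived.conf sketch for one gluster host
> > (untested; the interface name and floating IP are placeholders -
> > the point is that the VIP lives on a dedicated storage interface,
> > not ovirtmgmt):
> >
> >     vrrp_instance gluster_vip {
> >         state BACKUP
> >         interface eth1            # storage NIC, not ovirtmgmt
> >         virtual_router_id 51
> >         priority 100              # use a different priority on the peer
> >         advert_int 1
> >         virtual_ipaddress {
> >             10.0.1.100/24         # floating IP your storage path points at
> >         }
> >     }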
> >
> > Also - you may need to change your glusterfs ports so they don't
> > interfere with vdsm. My post here is a little outdated, but it still
> > has my findings on keepalived etc.:
> > http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/ [2]
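> >
> > (The clash is that libvirt reserves ports 49152+ for live migration,
> > which is also where newer gluster versions start their brick ports.
> > If I remember right, the fix is a one-line change on each gluster
> > host, followed by a glusterd restart - roughly:
> >
> >     # in /etc/glusterfs/glusterd.vol, inside the "volume management" block:
> >     option base-port 50152    # start brick ports clear of libvirt's range
> >
> > but double-check against the post above.)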
> >
> > The other thing to note is that you've only got two gluster hosts.
> > I believe oVirt now sets the quorum options by default, which
> > enforce that at least 2 nodes must be alive in your configuration.
> > This means that when only 1 gluster server is up, you'll be able to
> > read but not write; this is to avoid split-brain.
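> >
> > You can check and, at your own risk, relax that behaviour from the
> > gluster CLI. A sketch ("datavol" is a placeholder volume name;
> > disabling quorum on a 2-node replica brings back the split-brain
> > risk):
> >
> >     gluster volume info datavol
> >     gluster volume set datavol cluster.quorum-type none
> >     gluster volume set datavol cluster.server-quorum-type none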
> >
> > Thanks,
> > Andrew
> >
> > On Thu, Dec 19, 2013 at 5:12 AM, <gregoire.leroy at retenodus.net> wrote:
> >
> >> Hello,
> >>
> >> As I said in a previous email, I have this configuration with oVirt
> >> 3.3:
> >> 1 oVirt Engine
> >> 2 hosts running CentOS 6.5
> >>
> >> I successfully set up GlusterFS. I created a distributed replicated
> >> volume with 2 bricks: host1:/gluster and host2:/gluster.
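> >>
> >> (Roughly what I ran, with "datavol" standing in for my actual
> >> volume name:
> >>
> >>     gluster volume create datavol replica 2 \
> >>         host1:/gluster host2:/gluster
> >>     gluster volume start datavol
> >>
> >> As I understand it, with only 2 bricks at replica 2 this is
> >> effectively a pure replicated volume.)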
> >>
> >> Then I created a POSIXFS storage domain "storage_gluster" with the
> >> VFS type "glusterfs" and gave it the path "host1:/gluster".
> >>
> >> First, I'm rather surprised that I have to specify a host for the
> >> storage, as I want distributed replicated storage. I expected to
> >> be able to specify both hosts.
> >>
> >> Then I created a VM on this storage. The expected behaviour, if I
> >> shut down host1, would be that my VM keeps running on the second
> >> brick. Yet not only do I lose my VM, but host2 goes into a
> >> non-operational status because one of its data storage domains is
> >> unreachable.
> >>
> >> Did I miss something in the configuration? How could I get the
> >> desired behaviour?
> >>
> >> Thanks a lot,
> >> Regards,
> >> Grégoire Leroy
> >> _______________________________________________
> >> Users mailing list
> >> Users at ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users [1]
> >
> >
> >
> > Links:
> > ------
> > [1] http://lists.ovirt.org/mailman/listinfo/users
> > [2] http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/