[ovirt-users] 4-node oVirt with replica-3 gluster
Davide Ferrari
davide at billymob.com
Fri Sep 23 14:24:58 UTC 2016
Reading the glusterfs docs
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
"In a replica 3 volume, client-quorum is enabled by default and set to
'auto'. This means 2 bricks need to be up for the writes to succeed. Here
is how this configuration prevents files from ending up in split-brain:"
So this means that if the machine holding the two bricks (arbiter & data)
fails, the remaining data brick will be set RO, or am I missing something?
I mean, this config is better in the case of a network loss, and thus a
split brain, but it's far worse in the case of a machine failing or being
rebooted for maintenance.
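For what it's worth, the client-quorum behaviour can be inspected (and tuned)
from the CLI; just a sketch, assuming the volume is named "data" as in the
create command below:

  # show the effective client-quorum type (defaults to 'auto' on replica 3)
  gluster volume get data cluster.quorum-type

  # alternatively, pin quorum to an explicit brick count instead of 'auto'
  gluster volume set data cluster.quorum-type fixed
  gluster volume set data cluster.quorum-count 2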
2016-09-23 16:11 GMT+02:00 Davide Ferrari <davide at billymob.com>:
>
>
> 2016-09-23 15:57 GMT+02:00 Sahina Bose <sabose at redhat.com>:
>
>>
>> You could do this - where Node3 & Node2 also have arbiter bricks. Arbiter
>> bricks only store metadata and require very little storage capacity compared
>> to the data bricks.
>>
>> Node1      Node2      Node3      Node4
>> brick1     brick1     arb-brick
>>            arb-brick  brick1     brick1
>>
>
> Ok, cool! And this won't pose any problem if Node2 or Node4 fails?
>
> The syntax should be this:
>
> gluster volume create data replica 3 arbiter 1 node1:/brick node2:/brick
> node2:/arb_brick node3:/brick node4:/brick node4:/arb_brick
>
> Is it a problem to have more than one brick on the same host in the volume
> create syntax?
>
> Thanks again
>
> --
> Davide Ferrari
> Senior Systems Engineer
>
--
Davide Ferrari
Senior Systems Engineer