
Oh, thanks! Reinstalling all of Gluster from scratch right now to get it right. If I run into the problem I described before again, I will open another thread and attach the relevant logs.

2016-09-23 16:28 GMT+02:00 Sahina Bose <sabose@redhat.com>:
On Fri, Sep 23, 2016 at 7:54 PM, Davide Ferrari <davide@billymob.com> wrote:
Reading the glusterfs docs
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
"In a replica 3 volume, client-quorum is enabled by default and set to 'auto'. This means 2 bricks need to be up for the writes to succeed. Here is how this configuration prevents files from ending up in split-brain:"
So this means that if one of the machines with the 2 bricks (arbiter & normal) fails, the other brick will be set RO, or am I missing something? I mean, this config is better in case of a network loss, and thus a split brain, but it's far worse in case of a machine failing or being rebooted for maintenance.
See the updated vol create command - you should set it up such that no two bricks of a sub-volume are from the same host; that way you avoid the problem you describe above.
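(The updated command itself isn't quoted in this thread; a sketch of one that satisfies that rule, matching the layout described below and with assumed /gluster/... brick paths, could be:

  gluster volume create data replica 3 arbiter 1 \
      node1:/gluster/brick1 node2:/gluster/brick1 node3:/gluster/arb_brick \
      node3:/gluster/brick1 node4:/gluster/brick1 node2:/gluster/arb_brick
  # sub-volume 1: data bricks on node1 and node2, arbiter on node3
  # sub-volume 2: data bricks on node3 and node4, arbiter on node2
  # no sub-volume has two of its bricks on the same host)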
2016-09-23 16:11 GMT+02:00 Davide Ferrari <davide@billymob.com>:
2016-09-23 15:57 GMT+02:00 Sahina Bose <sabose@redhat.com>:
You could do this - where Node3 & Node2 also have arbiter bricks. Arbiter bricks only store metadata and require very little storage capacity compared to the data bricks.
Node1       Node2       Node3       Node4
brick1      brick1      arb-brick
            arb-brick   brick1      brick1
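Once created with such a layout, the replica sets (and which brick acts as the arbiter) can be verified with:

  gluster volume info data
  # bricks are listed per replica set; recent GlusterFS releases mark the
  # arbiter brick with "(arbiter)" in the brick list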
Ok, cool! And this won't pose any problem if Node2 or Node4 fails?
The syntax should be this:
gluster volume create data replica 3 arbiter 1 node1:/brick node2:/brick node2:/arb_brick node3:/brick node4:/brick node4:/arb_brick
Is it not a problem having more than one brick on the same host in the volume create syntax?
Thanks again
--
Davide Ferrari
Senior Systems Engineer