Hi,
keepalived is only for grabbing the gluster volume info (e.g. the
servers which host the bricks); from what I've noticed, your
clients will then connect to the gluster servers directly
(not using keepalived anymore).
Couldn't keepalived be replaced by using localhost as the hostname, as
Alex suggests?
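As far as I understand it, the server named in the path is only used to
fetch the volume layout, and the client then talks to every brick
directly, so instead of keepalived a fallback server can simply be
listed at mount time, e.g. ("datavol" is just a placeholder volume name):

    mount -t glusterfs -o backup-volfile-servers=HOSTB HOSTA:/datavol /mnt/test

(on older GlusterFS releases the option is spelled backupvolfile-server).
Please correct me if I've misunderstood that part.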
If you disable quorum then you won't have the issue of "read only" when
you lose a host, but you won't have protection from split brain (if
your two hosts lose network connectivity). VMs will keep writing to the
hosts; as you have the gluster server and client on the same host, this
is inevitable.
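(For reference, and if I understand correctly, the quorum discussed here
is controlled by volume options along these lines, with "datavol" again
just a placeholder:

    gluster volume set datavol cluster.quorum-type none
    gluster volume set datavol cluster.server-quorum-type none

would disable client- and server-side quorum, and setting them back to
"auto" and "server" re-enables it.)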
OK, I understand the problem caused by disabling quorum. So, what if,
while I have only two hosts, the lack of HA is not so dramatic, but it
will become necessary when I have more hosts (3 or 4)? Here is the
scenario I would like to have:
1) I have two hosts: HOSTA and HOSTB. Their GlusterFS bricks are
configured as a Distributed-Replicated volume and data is replicated.
=> For now, I'm totally OK with the fact that if a node fails, the VMs
on that host are stopped and unreachable. However, I would like the DC
to keep running when a node fails, so that VMs on the other host are not
stopped and a human intervention can start the failed VMs on the other
host. Would that be possible without disabling quorum?
2) In a few months, I'll add two other hosts to the GlusterFS volume.
Their bricks will be replicated.
=> At that time, I would like to be able to evolve my architecture
(without shutting down my VMs and exporting/importing them into a new
cluster) so that if a node fails, the VMs on that host start running on
the other host of the same replica pair (without manual intervention).
Is it possible?
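What I picture for step 2, assuming the volume is called "datavol" and
the new bricks live under /export/brick1 on the new hosts, is roughly:

    gluster volume add-brick datavol HOSTC:/export/brick1 HOSTD:/export/brick1
    gluster volume rebalance datavol start

i.e. adding one more replica pair to the existing distributed-replicated
volume and rebalancing, but please correct me if that's not the right
way to grow it.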
On 2013-12-20 16:22, a.ludas(a)gmail.com wrote:
Hi,
in a 2 node cluster you can set the path to localhost:volume. If one
host goes down and the SPM role switches to the remaining running host,
your master domain is still accessible and so your VMs stay up and
running.
Regards,
Alex
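If I understand the suggestion, it means giving the storage domain a
path of localhost:/datavol ("datavol" being a placeholder for my volume
name), which each host then mounts from its own local glusterd, the
equivalent of:

    mount -t glusterfs localhost:/datavol /mnt/test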
I tried it, but the storage/cluster still went down, probably because of
quorum.
Thank you very much,
Regards,
Grégoire Leroy