[ovirt-users] Q: Partitioning - oVirt 4.1 & GlusterFS 2-node System
Simone Tiraboschi
stirabos at redhat.com
Thu Dec 14 07:36:43 UTC 2017
On Wed, Dec 13, 2017 at 11:51 PM, Andrei V <andreil1 at starlett.lv> wrote:
> Hi, Donny,
>
> Thanks for the link.
>
> Do I understand correctly that I need at least a 3-node system to run in
> failover mode? So far I plan to deploy only 2 nodes, with either a hosted
> or a bare-metal engine.
>
> *The key thing to keep in mind regarding host maintenance and downtime is
> that this converged three node system relies on having at least two of the
> nodes up at all times. If you bring down two machines at once, you'll run
> afoul of the Gluster quorum rules that guard us from split-brain states in
> our storage, the volumes served by your remaining host will go read-only,
> and the VMs stored on those volumes will pause and require a shutdown and
> restart in order to run again.*
>
> What happens if one node goes down in a 2-node GlusterFS system (with a
> hosted engine)?
> A bare-metal engine can manage this situation, but I'm not sure about a
> hosted engine.
>
To be sure you cannot be affected by a split-brain issue, you need a full
replica 3 environment, or at least replica 3 with an arbiter node:
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
Otherwise, if for any reason (such as a network split) you end up with two
divergent copies of a file, you simply do not have enough information to
authoritatively pick the right copy and discard the other.
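
For reference, a minimal sketch of the gluster CLI steps for a replica 3
arbiter volume (the volume name, hostnames, and brick paths below are just
placeholders, not taken from your environment):

    # Create a replica 3 volume where the third brick is an arbiter:
    # it stores only file metadata, so the arbiter host needs little disk.
    gluster volume create engine replica 3 arbiter 1 \
        node1:/gluster_bricks/engine/brick \
        node2:/gluster_bricks/engine/brick \
        arbiter1:/gluster_bricks/engine/brick
    gluster volume start engine

    # Client-side quorum: writes are allowed only while a majority of the
    # bricks in the replica set is reachable, so a single isolated node
    # goes read-only instead of diverging.
    gluster volume set engine cluster.quorum-type auto

    # After a network split, check whether any files ended up in split-brain.
    gluster volume heal engine info split-brain

With two data nodes plus an arbiter, losing any single host keeps quorum;
losing two still takes the volume read-only, as the blog post quoted above
describes.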
>
>
>
> On 12/13/2017 11:17 PM, Donny Davis wrote:
>
> I would start here:
> https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>
> Pretty good basic guidance.
>
> Also, with software-defined storage it's recommended that there are at least
> two "storage" nodes and one arbiter node to maintain quorum.
>
> On Wed, Dec 13, 2017 at 3:45 PM, Andrei V <andreil1 at starlett.lv> wrote:
>
>> Hi,
>>
>> I'm going to set up a relatively simple 2-node system with oVirt 4.1,
>> GlusterFS, and several VMs running.
>> Each node is going to be installed on a dual-Xeon system with a single
>> RAID 5 array.
>>
>> The oVirt Node installer uses a relatively simple default partitioning scheme.
>> Should I leave it as is, or are there better options?
>> I never used GlusterFS before, so any expert opinion is very welcome.
>>
>> Thanks in advance.
>> Andrei