<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Dec 13, 2017 at 11:51 PM, Andrei V <span dir="ltr"><<a href="mailto:andreil1@starlett.lv" target="_blank">andreil1@starlett.lv</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<div class="gmail-m_-2462610824078421352moz-cite-prefix">Hi, Donny,<br>
<br>
Thanks for the link.<br>
<br>
Am I understood correctly that I'm need at least 3-node system to
run in failover mode? So far I'm plan to deploy only 2 nodes,
either with hosted either with bare metal engine.<br>
>
> "The key thing to keep in mind regarding host maintenance and downtime
> is that this converged three node system relies on having at least two
> of the nodes up at all times. If you bring down two machines at once,
> you'll run afoul of the Gluster quorum rules that guard us from
> split-brain states in our storage, the volumes served by your remaining
> host will go read-only, and the VMs stored on those volumes will pause
> and require a shutdown and restart in order to run again."
>
> What happens in a 2-node GlusterFS system (with hosted engine) if one
> node goes down?
> A bare-metal engine can manage this situation, but I'm not sure about
> a hosted engine.

To be sure you cannot be affected by a split-brain issue, you need a full
replica 3 environment, or at least replica 3 with an arbiter node:
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/

Otherwise, if for any reason (such as a network split) you end up with two
divergent copies of a file, you simply do not have enough information to
authoritatively pick the right copy and discard the other.
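For example (a minimal sketch only, not a tested recipe -- the hostnames
gluster1/gluster2/arbiter1 and the brick paths are placeholders), a
replica 3 volume whose third brick is a metadata-only arbiter would be
created like this:

    # The third brick stores only file names and metadata, yet it still
    # counts as a vote for quorum, so the arbiter host needs very little
    # disk space.
    gluster volume create engine replica 3 arbiter 1 \
        gluster1:/gluster_bricks/engine/brick \
        gluster2:/gluster_bricks/engine/brick \
        arbiter1:/gluster_bricks/engine/brick
    gluster volume start engine

    # Client-side quorum: refuse writes unless a majority of the bricks
    # is reachable; this is what prevents divergent copies of a file.
    gluster volume set engine cluster.quorum-type auto

    # If you suspect files have already diverged, this lists them:
    gluster volume heal engine info split-brain

With two data bricks plus the arbiter you still pay only two full
replicas' worth of disk, but you keep an odd vote count for quorum.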

> On 12/13/2017 11:17 PM, Donny Davis wrote:
<div dir="ltr">I would start here
<div><a href="https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/" target="_blank">https://ovirt.org/blog/2017/<wbr>04/up-and-running-with-ovirt-<wbr>4.1-and-gluster-storage/</a><br>
</div>
<div><br>
</div>
<div>Pretty good basic guidance. </div>
<div><br>
</div>
>> Also, with software-defined storage it's recommended that there are at
>> least two "storage" nodes and one arbiter node to maintain quorum.
>>
>> On Wed, Dec 13, 2017 at 3:45 PM, Andrei V <andreil1@starlett.lv> wrote:
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
<br>
I'm going to setup relatively simple 2-node system with
oVirt 4.1,<br>
GlusterFS, and several VMs running.<br>
Each node going to be installed on dual Xeon system with
single RAID 5.<br>
<br>
oVirt node installer uses relatively simple default
partitioning scheme.<br>
Should I leave it as is, or there are better options?<br>
I never used GlusterFS before, so any expert opinion is very
welcome.<br>
<br>
Thanks in advance.<br>
Andrei<br>
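
On the default partitioning question: I can't speak for the node
installer's defaults, but the Gluster administration guide recommends
putting each brick on a thin-provisioned LVM volume formatted as XFS
with a 512-byte inode size. A rough sketch, assuming a dedicated disk
/dev/sdb for the bricks (the device name, VG/LV names, and sizes below
are placeholders):

    pvcreate /dev/sdb
    vgcreate gluster_vg /dev/sdb

    # A thin pool plus a thin LV per brick, which is what allows
    # Gluster volume snapshots later on.
    lvcreate -L 900G -T gluster_vg/gluster_thinpool
    lvcreate -V 850G -T gluster_vg/gluster_thinpool -n engine_brick

    # 512-byte inodes leave room for Gluster's extended attributes.
    mkfs.xfs -f -i size=512 /dev/gluster_vg/engine_brick
    mkdir -p /gluster_bricks/engine
    mount /dev/gluster_vg/engine_brick /gluster_bricks/engine

(Plus an /etc/fstab entry, of course.) Keeping the bricks on storage
separate from the OS disk also means a node reinstall doesn't touch
the Gluster data.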

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users