<div dir="auto">Hi, <div dir="auto"><br></div><div dir="auto">AFAIK, during hosted engine deployment the installer checks the GlusterFS replica type, and replica 3 is a mandatory requirement. Previously, I got advice on this mailing list to look at a DRBD solution if you don't have a third node to run a GlusterFS replica 3 setup.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Dec 14, 2017, at 1:51, "Andrei V" <<a href="mailto:andreil1@starlett.lv" target="_blank">andreil1@starlett.lv</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<div class="m_2379513291706369479moz-cite-prefix">Hi, Donny,<br>
<br>
Thanks for the link.<br>
<br>
Am I correct in understanding that I need at least a 3-node system
to run in failover mode? So far I plan to deploy only 2 nodes,
either with a hosted or with a bare metal engine.<br>
<br>
<i>The key thing to keep in mind regarding host maintenance and
downtime is that this <b>converged three node system relies on
having at least two of the nodes up at all times</b>. If you
bring down two machines at once, you'll run afoul of the
Gluster quorum rules that guard us from split-brain states in
our storage, the volumes served by your remaining host will go
read-only, and the VMs stored on those volumes will pause and
require a shutdown and restart in order to run again.</i><br>
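The quorum behavior quoted above can be inspected directly on a running Gluster volume. A minimal sketch, assuming a volume named "engine" (a hypothetical name used for illustration) and the `gluster` CLI available on one of the nodes:

```shell
# Show the client- and server-side quorum settings for a volume
# ("engine" is a hypothetical volume name used for illustration)
gluster volume get engine cluster.quorum-type
gluster volume get engine cluster.server-quorum-type
```

With client quorum of type "auto" on a replica 3 volume, writes are allowed only while at least two bricks are reachable — which is exactly why two of the three hosts must stay up at all times.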
<br>
What happens in a 2-node GlusterFS system (with hosted engine)
if one node goes down?<br>
A bare metal engine can manage this situation, but I'm not sure
about a hosted engine.<br>
<br>
<br>
On 12/13/2017 11:17 PM, Donny Davis wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">I would start here
<div><a href="https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/" target="_blank">https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/</a><br>
</div>
<div><br>
</div>
<div>Pretty good basic guidance. </div>
<div><br>
</div>
<div>Also, with software-defined storage it's recommended that
there are at least two "storage" nodes and one arbiter node to
maintain quorum. </div>
</div>
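The two-data-nodes-plus-arbiter layout avoids a full third copy of the data: the arbiter brick stores only file metadata, so it can live on a small third machine. A sketch of creating such a volume, assuming hypothetical host names (host1, host2, arbiter) and brick paths:

```shell
# Hypothetical hosts and brick paths; the arbiter brick holds only
# metadata, so it needs little disk compared to the two data bricks.
gluster volume create engine replica 3 arbiter 1 \
  host1:/gluster/bricks/engine \
  host2:/gluster/bricks/engine \
  arbiter:/gluster/bricks/engine
gluster volume start engine
```

This keeps quorum (2 of 3 bricks) satisfied when any single machine fails, while only two machines carry the actual VM data.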
<div class="gmail_extra"><br>
<div class="gmail_quote">On Wed, Dec 13, 2017 at 3:45 PM, Andrei
V <span dir="ltr"><<a href="mailto:andreil1@starlett.lv" target="_blank">andreil1@starlett.lv</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
I'm going to set up a relatively simple 2-node system with
oVirt 4.1,<br>
GlusterFS, and several VMs running.<br>
Each node is going to be installed on a dual Xeon system with a
single RAID 5.<br>
<br>
The oVirt node installer uses a relatively simple default
partitioning scheme.<br>
Should I leave it as is, or are there better options?<br>
I have never used GlusterFS before, so any expert opinion is very
welcome.<br>
<br>
Thanks in advance.<br>
Andrei<br>
______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<p><br>
</p>
</div>
<br></blockquote></div></div>