Q: Partitioning - oVirt 4.1 & GlusterFS 2-node System

Hi, I'm going to set up a relatively simple 2-node system with oVirt 4.1, GlusterFS, and several VMs running. Each node is going to be installed on a dual-Xeon server with a single RAID 5 array. The oVirt Node installer uses a relatively simple default partitioning scheme. Should I leave it as is, or are there better options? I have never used GlusterFS before, so any expert opinion is very welcome. Thanks in advance. Andrei
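[Editor's note: for context on the partitioning question, the layout commonly suggested for Gluster bricks is a dedicated thin-provisioned LVM pool with an XFS filesystem, kept separate from the OS volumes. A rough sketch, assuming a spare device /dev/sdb; device, volume group, and size values are placeholders, not the installer's defaults:

    pvcreate /dev/sdb
    vgcreate gluster_vg /dev/sdb
    # Thin pool for the bricks; thin provisioning enables snapshots
    lvcreate -L 500G --thinpool gluster_pool gluster_vg
    lvcreate -V 500G --thin -n gluster_lv gluster_vg/gluster_pool
    # XFS with 512-byte inodes is the usual recommendation for bricks
    mkfs.xfs -i size=512 /dev/gluster_vg/gluster_lv
    mkdir -p /gluster_bricks/vmstore
    mount /dev/gluster_vg/gluster_lv /gluster_bricks/vmstore]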

I would start here: https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/ Pretty good basic guidance. Also, with software-defined storage it's recommended there are at least two "storage" nodes and one arbiter node to maintain quorum (see the example sketch after the quoted message below). On Wed, Dec 13, 2017 at 3:45 PM, Andrei V <andreil1@starlett.lv> wrote:
Hi,
I'm going to set up a relatively simple 2-node system with oVirt 4.1, GlusterFS, and several VMs running. Each node is going to be installed on a dual-Xeon server with a single RAID 5 array.
The oVirt Node installer uses a relatively simple default partitioning scheme. Should I leave it as is, or are there better options? I have never used GlusterFS before, so any expert opinion is very welcome.
Thanks in advance. Andrei
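[Editor's note: to illustrate the arbiter recommendation above: with two storage nodes plus a third, lighter arbiter host, the volume would be created roughly like this. Hostnames, the volume name, and brick paths are placeholders; this is a sketch, not the blog post's exact commands:

    # Bricks on host1/host2 hold data; host3 stores only metadata
    gluster volume create vmstore replica 3 arbiter 1 \
      host1:/gluster_bricks/vmstore/brick \
      host2:/gluster_bricks/vmstore/brick \
      host3:/gluster_bricks/vmstore/brick
    gluster volume start vmstore

The arbiter brick keeps file metadata only, so quorum can be maintained without a third full copy of the data.]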

Hi, Donny,
Thanks for the link.
Do I understand correctly that I need at least a 3-node system to run in failover mode? So far I plan to deploy only 2 nodes, with either a hosted or a bare-metal engine.
*The key thing to keep in mind regarding host maintenance and downtime is that this converged three node system relies on having at least two of the nodes up at all times. If you bring down two machines at once, you'll run afoul of the Gluster quorum rules that guard us from split-brain states in our storage, the volumes served by your remaining host will go read-only, and the VMs stored on those volumes will pause and require a shutdown and restart in order to run again.*
What happens if in a 2-node GlusterFS system (with hosted engine) one node goes down? A bare-metal engine can manage this situation, but I'm not sure about a hosted engine.
On 12/13/2017 11:17 PM, Donny Davis wrote:
I would start here: https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
Pretty good basic guidance.
Also, with software-defined storage it's recommended there are at least two "storage" nodes and one arbiter node to maintain quorum.
On Wed, Dec 13, 2017 at 3:45 PM, Andrei V <andreil1@starlett.lv> wrote:
Hi,
I'm going to set up a relatively simple 2-node system with oVirt 4.1, GlusterFS, and several VMs running. Each node is going to be installed on a dual-Xeon server with a single RAID 5 array.
The oVirt Node installer uses a relatively simple default partitioning scheme. Should I leave it as is, or are there better options? I have never used GlusterFS before, so any expert opinion is very welcome.
Thanks in advance. Andrei
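[Editor's note: to make the quorum behaviour in the quoted passage concrete, these are the Gluster options involved. The volume name is a placeholder, and this is a sketch for inspection under then-current GlusterFS, not specific tuning advice:

    # Client-side quorum: with "auto", writes require a majority of the
    # replica set, so a plain replica 2 volume goes read-only as soon as
    # one of the two bricks is unreachable.
    gluster volume get vmstore cluster.quorum-type

    # Server-side quorum: bricks are stopped when fewer than the given
    # percentage of nodes in the trusted pool are reachable.
    gluster volume set all cluster.server-quorum-ratio 51%
    gluster volume set vmstore cluster.server-quorum-type server

This is why a 2-node setup cannot ride out a node failure cleanly: the surviving node alone is not a majority.]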

Hi, AFAIK, during hosted-engine deployment the installer will check the GlusterFS replica type, and replica 3 is a mandatory requirement. Previously, I got advice within this mailing list to look at a DRBD solution if you don't have a third node to run GlusterFS replica 3 (a rough sketch of such a DRBD resource follows after the quoted message). On Dec 14, 2017 at 1:51, "Andrei V" <andreil1@starlett.lv> wrote:
Hi, Donny,
Thanks for the link.
Do I understand correctly that I need at least a 3-node system to run in failover mode? So far I plan to deploy only 2 nodes, with either a hosted or a bare-metal engine.
*The key thing to keep in mind regarding host maintenance and downtime is that this converged three node system relies on having at least two of the nodes up at all times. If you bring down two machines at once, you'll run afoul of the Gluster quorum rules that guard us from split-brain states in our storage, the volumes served by your remaining host will go read-only, and the VMs stored on those volumes will pause and require a shutdown and restart in order to run again.*
What happens if in a 2-node GlusterFS system (with hosted engine) one node goes down? A bare-metal engine can manage this situation, but I'm not sure about a hosted engine.
On 12/13/2017 11:17 PM, Donny Davis wrote:
I would start here: https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
Pretty good basic guidance.
Also, with software-defined storage it's recommended there are at least two "storage" nodes and one arbiter node to maintain quorum.
On Wed, Dec 13, 2017 at 3:45 PM, Andrei V <andreil1@starlett.lv> wrote:
Hi,
I'm going to set up a relatively simple 2-node system with oVirt 4.1, GlusterFS, and several VMs running. Each node is going to be installed on a dual-Xeon server with a single RAID 5 array.
The oVirt Node installer uses a relatively simple default partitioning scheme. Should I leave it as is, or are there better options? I have never used GlusterFS before, so any expert opinion is very welcome.
Thanks in advance. Andrei
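[Editor's note: to make the DRBD alternative concrete, a minimal two-node resource definition looks roughly like this (DRBD 8.4-style syntax; the resource name, hostnames, backing disk, and addresses are placeholders):

    resource vmstore {
        device    /dev/drbd0;
        disk      /dev/sdb1;      # backing device, assumed identical on both nodes
        meta-disk internal;
        on node1 {
            address 10.0.0.1:7789;
        }
        on node2 {
            address 10.0.0.2:7789;
        }
    }

After distributing the file to both nodes, the resource would be initialized with "drbdadm create-md vmstore" and brought up with "drbdadm up vmstore". Note that a 2-node DRBD cluster has its own split-brain considerations and is usually paired with fencing.]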

On Wed, Dec 13, 2017 at 11:51 PM, Andrei V <andreil1@starlett.lv> wrote:
Hi, Donny,
Thanks for the link.
Do I understand correctly that I need at least a 3-node system to run in failover mode? So far I plan to deploy only 2 nodes, with either a hosted or a bare-metal engine.
*The key thing to keep in mind regarding host maintenance and downtime is that this converged three node system relies on having at least two of the nodes up at all times. If you bring down two machines at once, you'll run afoul of the Gluster quorum rules that guard us from split-brain states in our storage, the volumes served by your remaining host will go read-only, and the VMs stored on those volumes will pause and require a shutdown and restart in order to run again.*
What happens if in a 2-node GlusterFS system (with hosted engine) one node goes down? A bare-metal engine can manage this situation, but I'm not sure about a hosted engine.
In order to be sure you cannot get affected by a split-brain issue, you need a full replica 3 environment, or at least replica 3 with an arbiter node: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%... Otherwise, if for any reason (like a network split) you have two divergent copies of a file, you simply do not have enough information to authoritatively pick the right copy and discard the other.
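[Editor's note: if a replica 2 volume does end up with divergent copies, Gluster can only report them; picking a winner is a manual decision. A sketch using the CLI-based split-brain resolution available in GlusterFS releases of that era (volume name, brick, and file path are placeholders):

    # List files currently in split-brain
    gluster volume heal vmstore info split-brain

    # Resolve one file by policy, e.g. keep the larger copy ...
    gluster volume heal vmstore split-brain bigger-file /path/in/volume/disk.img

    # ... or declare one brick's copy authoritative
    gluster volume heal vmstore split-brain source-brick host1:/gluster_bricks/vmstore/brick /path/in/volume/disk.img

An arbiter or full third replica avoids ever reaching this point.]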
On 12/13/2017 11:17 PM, Donny Davis wrote:
I would start here: https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
Pretty good basic guidance.
Also, with software-defined storage it's recommended there are at least two "storage" nodes and one arbiter node to maintain quorum.
On Wed, Dec 13, 2017 at 3:45 PM, Andrei V <andreil1@starlett.lv> wrote:
Hi,
I'm going to set up a relatively simple 2-node system with oVirt 4.1, GlusterFS, and several VMs running. Each node is going to be installed on a dual-Xeon server with a single RAID 5 array.
The oVirt Node installer uses a relatively simple default partitioning scheme. Should I leave it as is, or are there better options? I have never used GlusterFS before, so any expert opinion is very welcome.
Thanks in advance. Andrei
participants (4)
- Andrei V
- Artem Tambovskiy
- Donny Davis
- Simone Tiraboschi