Moacir, I believe that to use the 3 servers directly connected to each
other without a switch, you have to have a bridge on each server spanning
its two physical interfaces, so traffic can pass through at layer 2 (is it
possible to create this from the oVirt Engine web interface?). If your
ovirtmgmt network is separate from the others (it really should be), that
should be fine to do.
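Something like this on each server, I suppose (a minimal sketch; interface
names and addressing are hypothetical, and since the fully connected
triangle forms a layer-2 loop you would want STP enabled on the bridge):

    # Bridge the two 40Gb ports so storage/migration traffic can pass
    # through this node to the third one
    ip link add name br40g type bridge
    ip link set dev br40g type bridge stp_state 1   # avoid a forwarding loop
    ip link set dev ens1f0 master br40g
    ip link set dev ens1f1 master br40g
    ip addr add 10.10.40.1/24 dev br40g             # .1/.2/.3, unique per host
    ip link set dev ens1f0 up
    ip link set dev ens1f1 up
    ip link set dev br40g up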
Fernando
On 07/08/2017 07:13, Moacir Ferreira wrote:
Hi, in-line responses.
Thanks,
Moacir
------------------------------------------------------------------------
*From:* Yaniv Kaul <ykaul@redhat.com>
*Sent:* Monday, August 7, 2017 7:42 AM
*To:* Moacir Ferreira
*Cc:* users@ovirt.org
*Subject:* Re: [ovirt-users] Good practices
On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira
<moacirferreira@hotmail.com> wrote:
I am planning to assemble an oVirt "pod" made of 3 servers, each
with 2 CPU sockets of 12 cores, 256 GB RAM, 7 10K HDDs, and 1 SSD.
The idea is to use GlusterFS to provide HA for the VMs. The 3 servers
each have a dual 40Gb NIC and a dual 10Gb NIC, so my intention is to
create a loop, like a server triangle, using the 40Gb NICs for access
to the virtualization files (the VMs' .qcow2 images) and for moving
VMs around the pod (east/west traffic), while using the 10Gb
interfaces to provide services to the outside world (north/south traffic).
Very nice gear. How exactly are you planning the network? Without a
switch, back-to-back? (Sounds OK to me; I just wanted to make sure this
is what the 'dual' is used for.) However, I'm unsure whether you have
the correct balance between the interface speed (40Gb) and the disks
(too many HDDs?).
Moacir: The idea is to have a very high performance network for the
distributed file system and to prevent bottlenecks when we move a VM
from one node to another. Using 40Gb NICs I can just connect the servers
back-to-back. In this case I don't need an expensive 40Gb switch, I
get very high speed, and there is no contention between north/south
and east/west traffic.
This said, my main question is: how should I deploy GlusterFS in
such an oVirt scenario? Specifically:
1 - Should I create 3 RAID arrays (e.g. RAID 5), one on each oVirt
node, and then create a GlusterFS volume on top of them?
I would assume RAID 1 for the operating system (you don't want a
single point of failure there?) and the rest as JBOD. The SSD will be
used for caching, I reckon? (I personally would add more SSDs instead
of HDDs, but it does depend on the disk sizes and your space requirements.)
Moacir: Yes, I agree that I need a RAID-1 for the OS. Now, generic
JBOD, or a JBOD assembled using RAID-5 "disks" created by the server's
disk controller?
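Either way, each Gluster brick typically sits on its own XFS filesystem;
a minimal preparation sketch, assuming hypothetical device and mount
names (one brick per disk, or per RAID-5 virtual disk):

    mkfs.xfs -f -i size=512 /dev/sdb   # 512-byte inodes, the usual Gluster recommendation
    mkdir -p /gluster/brick1
    mount /dev/sdb /gluster/brick1
    mkdir -p /gluster/brick1/data      # brick directory handed to gluster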
2 - Instead, should I create a JBOD array made of all of each server's disks?
3 - What is the best Gluster configuration to provide for HA while
not consuming too much disk space?
Replica 2 + Arbiter sounds good to me.
Moacir: I agree, and that is what I am using.
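For reference, a replica 2 + arbiter volume is created as "replica 3
arbiter 1": two full data copies plus a metadata-only third brick. A
sketch, with hypothetical hostnames and brick paths:

    gluster volume create vmstore replica 3 arbiter 1 \
        srv1:/gluster/brick1/data \
        srv2:/gluster/brick1/data \
        srv3:/gluster/brick1/data      # the third brick becomes the arbiter
    gluster volume start vmstore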
4 - Does an oVirt hypervisor pod like the one I am planning to build,
and the virtualization environment, benefit from tiering when using an
SSD disk? And if so, will Gluster do it by default, or do I have to
configure it to do so?
Yes, I believe using lvmcache is the best way to go.
Moacir: Are you sure? I ask because the qcow2 files will be quite big,
so if tiering is "file based" the SSD would have to be very, very big,
unless Gluster tiering does it by "chunks of data".
Bottom line: what is the good practice for using GlusterFS in small
pods for enterprises?
Don't forget jumbo frames, libgfapi (hopefully coming in 4.1.5), and
sharding (enabled out of the box if you use a hyper-converged setup
via gdeploy).
*Moacir:* Yes! This is another reason to have separate networks for
north/south and east/west. That way I can use the standard MTU on
the 10Gb NICs and jumbo frames on the 40Gb file/migration NICs.
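For example (interface and volume names are hypothetical, and gdeploy
normally sets the shard options for you in a hyper-converged deployment):

    # Jumbo frames on the 40Gb storage/migration interfaces only
    ip link set dev ens1f0 mtu 9000
    ip link set dev ens1f1 mtu 9000
    ip link set dev br40g mtu 9000
    # Sharding splits big qcow2 files into fixed-size pieces for faster healing
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 512MB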
Y.
Your opinion/feedback will be really appreciated!
Moacir
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users