Hi, in-line responses.
Thanks,
Moacir
________________________________
From: Yaniv Kaul <ykaul(a)redhat.com>
Sent: Monday, August 7, 2017 7:42 AM
To: Moacir Ferreira
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Good practices
On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira <moacirferreira(a)hotmail.com> wrote:
I am willing to assemble an oVirt "pod" made of 3 servers, each with 2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb NIC. So my intention is to create a loop, like a server triangle, using the 40Gb NICs for virtualization file (VM .qcow2) access and for moving VMs around the pod (east/west traffic), while using the 10Gb interfaces for giving services to the outside world (north/south traffic).
Very nice gear. How are you planning the network exactly? Without a switch, back-to-back? (Sounds OK to me, just wanted to ensure this is what the 'dual' is used for.) However, I'm unsure if you have the correct balance between the interface speeds (40Gb) and the disks (too many HDDs?).
Moacir: The idea is to have a very high performance network for the distributed file system and to prevent bottlenecks when we move a VM from one node to another. Using 40Gb NICs I can just connect the servers back-to-back. In this case I don't need an expensive 40Gb switch, I get very high speed, and there is no contention between north/south and east/west traffic.
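Just to illustrate the triangle: with the dual 40Gb ports you end up with three point-to-point links, one per pair of nodes. A minimal sketch of the addressing (interface names and subnets below are made up, not from this thread):

    # node1
    ip addr add 192.168.12.1/30 dev ens1f0    # link to node2
    ip addr add 192.168.31.2/30 dev ens1f1    # link to node3
    # node2
    ip addr add 192.168.12.2/30 dev ens1f0    # link to node1
    ip addr add 192.168.23.1/30 dev ens1f1    # link to node3
    # node3
    ip addr add 192.168.23.2/30 dev ens1f0    # link to node2
    ip addr add 192.168.31.1/30 dev ens1f1    # link to node1

Each pair of nodes then talks over its own direct link; the storage hostnames used for the Gluster peers just have to resolve to the right per-link address on each node (e.g. via /etc/hosts).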
This said, my first question is: how should I deploy GlusterFS in such an oVirt scenario? My questions are:
1 - Should I create 3 RAID arrays (e.g. RAID 5), one on each oVirt node, and then create a GlusterFS volume using them?
I would assume RAID 1 for the operating system (you don't want a single point of failure there?) and the rest as JBODs. The SSD will be used for caching, I reckon? (I personally would add more SSDs instead of HDDs, but it does depend on the disk sizes and your space requirements.)
Moacir: Yes, I agree that I need a RAID-1 for the OS. Now, generic JBOD, or a JBOD assembled using RAID-5 "disks" created by the server's disk controller?
2 - Instead, should I create a JBOD array made of all the servers' disks?
3 - What is the best Gluster configuration to provide HA while not consuming too much disk space?
Replica 2 + Arbiter sounds good to me.
Moacir: I agree, and that is what I am using.
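For reference, a hand-made arbitrated replicated volume looks roughly like this (gdeploy/the hosted-engine wizard does the equivalent for you; volume name, hostnames and brick paths are placeholders):

    gluster volume create vmstore replica 3 arbiter 1 \
        node1:/gluster/brick1/vmstore \
        node2:/gluster/brick1/vmstore \
        node3:/gluster/brick1/vmstore    # every third brick is the arbiter

Only the first two bricks hold full copies of the data; the arbiter brick stores metadata only, so it can be small, and usable space is roughly half of the raw brick space instead of a third as with plain replica 3.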
4 - Does an oVirt hypervisor pod like the one I am planning to build, and the virtualization environment, benefit from tiering when using an SSD disk? And if yes, will Gluster do it by default or do I have to configure it to do so?
Yes, I believe using lvmcache is the best way to go.
Moacir: Are you sure? I say that because the qcow2 files will be quite big. So if tiering is "file based", the SSD would have to be very, very big, unless Gluster tiering does it by "chunks of data".
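For what it's worth, lvmcache works at the block level (dm-cache underneath), so the size of the qcow2 files doesn't matter; only the hot blocks get promoted to the SSD. A rough sketch of attaching an SSD as a cache to the LV that backs a brick (device, VG/LV names and sizes are placeholders):

    pvcreate /dev/sdh                              # the SSD
    vgextend gluster_vg /dev/sdh
    lvcreate -L 180G -n brick_cache gluster_vg /dev/sdh
    lvcreate -L 2G -n brick_cache_meta gluster_vg /dev/sdh
    lvconvert --type cache-pool --poolmetadata gluster_vg/brick_cache_meta gluster_vg/brick_cache
    lvconvert --type cache --cachepool gluster_vg/brick_cache gluster_vg/brick_data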
At the bottom line, what is the good practice for using GlusterFS in small pods for enterprises?
Don't forget jumbo frames. libgfapi (coming, hopefully, in 4.1.5). Sharding (enabled out of the box if you use a hyper-converged setup via gdeploy).
Moacir: Yes! This is another reason to have separate networks for north/south and east/west. That way I can use the standard MTU on the 10Gb NICs and jumbo frames on the file/move 40Gb NICs.
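If it helps, those two items amount to something like this (interface and volume names are placeholders; in practice the MTU is set on the oVirt logical network so it persists, and gdeploy already enables sharding on hyper-converged setups):

    ip link set dev ens1f0 mtu 9000                  # 40Gb storage/migration NICs only
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 512MB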
Y.
Your opinion/feedback will be really appreciated!
Moacir
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users