Devin,
Many, many thanks for your response. I will read the doc you sent and if I still have questions I will post them here.
But why would I use a RAIDed brick if Gluster, by itself, already "protects" the data by making replicas? You see, that is what is confusing to me...
Thanks,
Moacir
________________________________
From: Devin Acosta <devin(a)pabstatencio.com>
Sent: Monday, August 7, 2017 7:46 AM
To: Moacir Ferreira; users(a)ovirt.org
Subject: Re: [ovirt-users] Good practices
Moacir,
I have recently installed multiple Red Hat Virtualization hosts for several different companies, and have dealt with the Red Hat Support Team in depth about optimal configuration with regard to setting up GlusterFS most efficiently, and I wanted to share with you what I learned.
In general, the Red Hat Virtualization team frowns upon using each disk of the system as just a JBOD. Sure, there is some protection by having the data replicated; however, the recommendation is to use RAID 6 (preferred), RAID 5, or at the very least RAID 1.
Here is the direct quote from Red Hat when I asked about RAID and Bricks:
"A typical Gluster configuration would use RAID underneath the bricks. RAID=
6 is most typical as it gives you 2 disk failure protection, but RAID 5 co=
uld be used too. Once you have the RAIDed bricks, you'd then apply the desi=
red replication on top of that. The most popular way of doing this would be=
distributed replicated with 2x replication. In general you'll get better p=
erformance with larger bricks. 12 drives is often a sweet spot. Another opt=
ion would be to create a separate tier using all SSD=92s.=94
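To make that concrete, here is a rough, hypothetical sketch of what a distributed-replicated volume with 2x replication on RAIDed bricks could look like from the Gluster CLI (the hostnames, volume name, and brick paths are made up, and each /gluster/brickN mount is assumed to sit on its own RAID 6 array):

  # Six bricks, replica 2: pairs are (1,2) (3,4) (5,6), chained so that
  # no replica pair lives entirely on one host.
  gluster volume create vmdata replica 2 \
      host1:/gluster/brick1/vmdata host2:/gluster/brick1/vmdata \
      host2:/gluster/brick2/vmdata host3:/gluster/brick2/vmdata \
      host3:/gluster/brick3/vmdata host1:/gluster/brick3/vmdata
  gluster volume start vmdata

Note that Gluster will warn about split-brain risk for plain replica 2; the arbiter layout mentioned further down avoids that.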
In order to do SSD tiering, from my understanding you would need 1 x NVMe drive in each server, or a 4 x SSD hot tier (it needs to be distributed-replicated for the hot tier if not using NVMe). So with you only having 1 SSD drive in each server, I'd suggest maybe looking into the NVMe option.
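If you did go down the tiering road, the hot tier is attached to an existing volume as a separate step. A hedged sketch only (hostnames and SSD brick paths are hypothetical, and this assumes you actually have the distributed-replicated set of SSD bricks described above):

  # Attach a replicated SSD hot tier to an existing volume
  gluster volume tier vmdata attach replica 2 \
      host1:/gluster/ssd1/vmdata host2:/gluster/ssd1/vmdata \
      host2:/gluster/ssd2/vmdata host3:/gluster/ssd2/vmdata
  # Check tier status / file migration activity
  gluster volume tier vmdata status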
Since you're using only 3 servers, what I'd probably suggest is to do (2 Replicas + Arbiter Node). This setup actually doesn't require the 3rd server to have big drives at all, as it only stores metadata about the files and not actually a full copy.
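For that layout, the arbiter volume is created in one shot; a minimal sketch with hypothetical names, putting the third, small brick on the arbiter node:

  # replica 3 arbiter 1: two full data copies plus a metadata-only arbiter brick
  gluster volume create vmstore replica 3 arbiter 1 \
      host1:/gluster/brick1/vmstore \
      host2:/gluster/brick1/vmstore \
      host3:/gluster/arbiter/vmstore
  gluster volume start vmstore

Every third brick in the list becomes the arbiter, so it only needs enough space for metadata.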
Please see the attached document that was given to me by Red Hat to get more information on this. Hope this information helps you.
--
Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect
On August 6, 2017 at 7:29:29 PM, Moacir Ferreira (moacirferreira@hotmail.com) wrote:
I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU sockets of 12 cores, 256GB RAM, 7 x 10K HDD, and 1 SSD. The idea is to use GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb NIC. So my intention is to create a loop, like a server triangle, using the 40Gb NICs for virtualization file (VM .qcow2) access and to move VMs around the pod (east/west traffic), while using the 10Gb interfaces for giving services to the outside world (north/south traffic).
This said, my first question is: how should I deploy GlusterFS in such an oVirt scenario? More specifically:
1 - Should I create 3 RAID arrays (e.g. RAID 5), one on each oVirt node, and then create a GlusterFS volume using them?
2 - Instead, should I create a JBOD array made of each server's disks?
3 - What is the best Gluster configuration to provide HA while not consuming too much disk space?
4 - Does an oVirt hypervisor pod like I am planning to build, and the virtualization environment, benefit from tiering when using an SSD disk? And if yes, will Gluster do it by default or do I have to configure it to do so?
Bottom line: what is the good practice for using GlusterFS in small pods for enterprises?
Your opinion/feedback will be really appreciated!
Moacir
_______________________________________________
Users mailing list
Users@ovirt.org<mailto:Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users