Fernando,
Let's see what people say... But this is what I understood Red Hat says is the best performance model. That is the main reason I opened this discussion: as far as I can see, some of you in the community do not agree.
But when I think about a "distributed file system" that can make any number of copies you want, it does not make sense to use a RAIDed brick; what makes sense is to use JBOD.
Moacir
________________________________
From: fernando.frediani@upx.com.br <fernando.frediani@upx.com.br> on behalf of FERNANDO FREDIANI <fernando.frediani@upx.com>
Sent: Tuesday, August 8, 2017 3:08 AM
To: Moacir Ferreira
Cc: Colin Coe; users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Moacir, I understand that if you do this type of configuration you will be severely impacted on storage performance, especially for writes. Even if you have a hardware RAID controller with writeback cache you will have a significant performance penalty and may not fully use all the resources you mentioned you have.
Fernando
2017-08-07 10:03 GMT-03:00 Moacir Ferreira <moacirferreira@hotmail.com>:
Hi Colin,
Take a look at Devin's response. Also, read the doc he shared; it gives some hints on how to deploy Gluster.
It seems that if you want high performance you should have the bricks created as RAID (5 or 6) by the server's disk controller and then assemble a JBOD GlusterFS on top of them. The attached document is Gluster specific and not for oVirt. But at this point I think that having the SSD will not be a plus, as with the RAID controller in the way Gluster will not be aware of the SSD. Regarding the OS, my idea is to have a RAID 1, made of 2 low-cost HDDs, to install it on.
So far, based on the information received, I should create a single RAID 5 or 6 on each server and then use that disk as a brick to create my Gluster cluster, made of 2 replicas + 1 arbiter. What is new for me is the detail that the arbiter does not need a lot of space as it only keeps metadata.
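As a rough sketch of that layout (the hostnames gfs1/gfs2/gfs3, the volume name "vmstore" and the brick path /gluster/brick1 are placeholders, not values from this thread), the replica + arbiter volume would be created with something like:

  # one RAID-backed brick per data node, plus a small metadata-only arbiter brick
  gluster volume create vmstore replica 3 arbiter 1 \
      gfs1:/gluster/brick1/vmstore \
      gfs2:/gluster/brick1/vmstore \
      gfs3:/gluster/brick1/vmstore
  gluster volume start vmstore

The third brick only ever holds file names and metadata, which is why the arbiter machine can get away with much less disk than the two data nodes.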
Thanks for your response!
Moacir
________________________________
From: Colin Coe <colin.coe@gmail.com>
Sent: Monday, August 7, 2017 12:41 PM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Hi
I just thought that you'd do hardware RAID if you had the controller or JBOD if you didn't. In hindsight, a server with 40Gbps NICs is pretty likely to have a hardware RAID controller. I've never done JBOD with hardware RAID. I think having a single gluster brick on hardware JBOD would be riskier than multiple bricks, each on a single disk, but that's not based on anything other than my prejudices.
I thought gluster tiering was for the most frequently accessed files, in which case all the VM disks would end up in the hot tier. However, I have been wrong before...
I just wanted to know where the OS was going as I didn't see it mentioned in the OP. Normally, I'd have the OS on a RAID1, but in your case that's a lot of wasted disk.
Honestly, I think Yaniv's answer was far better than my own and made the important point about having an arbiter.
Thanks
On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira <moacirferreira@hotmail.com> wrote:
Hi Colin,
I am in Portugal, so sorry for this late response. It is quite confusing for me, please consider:
1 - What if the RAID is done by the server's disk controller, not by software?
2 - For JBOD I am just using gdeploy to deploy it (a rough sketch of the config is below, after point 4). However, I am not using the oVirt node GUI to do this.
3 - As the VM .qcow2 files are quite big, tiering would only help if made by an intelligent system that uses the SSD for chunks of data, not for the entire .qcow2 file. But I guess this is a problem everybody else has. So, do you know how tiering works in Gluster?
4 - I am putting the OS on the first disk. However, would you do differently?
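For point 2, a minimal gdeploy-style sketch of what such a deployment could look like (the host IPs, device name, volume name and even the exact key names here are illustrative from memory, so check them against the gdeploy documentation rather than copying them as-is):

  [hosts]
  10.10.10.1
  10.10.10.2
  10.10.10.3

  [backend-setup]
  devices=sdb
  mountpoints=/gluster/brick1
  brick_dirs=/gluster/brick1/vmstore

  [volume]
  action=create
  volname=vmstore
  replica=yes
  replica_count=3
  force=yes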
Moacir
________________________________
From: Colin Coe <colin.coe@gmail.com>
Sent: Monday, August 7, 2017 4:48 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices
1) RAID5 may be a performance hit.
2) I'd be inclined to do this as JBOD by creating a distributed disperse volume on each server. Something like

echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
$(for SERVER in a b c; do for BRICK in $(seq 1 5); do echo -e "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"; done; done)
3) I think the above.
4) Gluster does support tiering, but IIRC you'd need the same number of SSDs as spindle drives. There may be another way to use the SSD as a fast cache.
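If memory serves, the hot tier gets attached to an existing volume with something along these lines (volume name and SSD brick paths are placeholders, and the exact syntax is worth checking against the Gluster docs for your version):

  gluster volume tier dispersevol attach replica 3 \
      servera:/ssd/brick1 serverb:/ssd/brick1 serverc:/ssd/brick1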
Where are you putting the OS?
Hope I understood the question...
Thanks
On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira <moacirferreira@hotmail.com> wrote:
I am willing to assemble an oVirt "pod" made of 3 servers, each with 2 CPU sockets of 12 cores, 256GB RAM, 7 x 10K HDD and 1 SSD. The idea is to use GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb NIC. So my intention is to create a loop, like a server triangle, using the 40Gb NICs for virtualization file (VM .qcow2) access and for moving VMs around the pod (east/west traffic), while using the 10Gb interfaces for giving services to the outside world (north/south traffic).
This said, my first question is: how should I deploy GlusterFS in such an oVirt scenario? My questions are:
1 - Should I create 3 RAID arrays (e.g. RAID 5), one on each oVirt node, and then create a GlusterFS using them?
2 - Instead, should I create a JBOD array made of all the servers' disks?
3 - What is the best Gluster configuration to provide HA while not consuming too much disk space?
4 - Does an oVirt hypervisor pod like the one I am planning to build, and its virtualization environment, benefit from tiering when using an SSD disk? And if yes, will Gluster do it by default or do I have to configure it to do so?
Bottom line: what is the good practice for using GlusterFS in small pods for enterprises?
Your opinion/feedback will be really appreciated!
Moacir