Ok, the 40Gb NICs that I got were free. But anyway, if you are working with
6 HDDs + 1 SSD per server, you end up with 21 disks in the cluster. Since
data on a JBOD setup is replicated over the network, the traffic can be
really intensive, especially depending on the number of replicas you choose
for your needs. Also, when live-migrating a VM you must transfer its memory
contents to another node (just think about moving a VM with 32GB of RAM).
All together, that can be quite a large amount of data moving over the
network all the time. While a 40Gb NIC is not a "must", I think it is the
more affordable option, as it costs much less than a good disk controller.
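
Just to put some very rough numbers on that, here is a quick Python sketch.
It assumes the links run at full line rate with no protocol overhead, and
that (as far as I understand) Gluster replicates from the client side, so
treat it as a ballpark, not a measurement:

    # Back-of-the-envelope numbers, assuming full line rate, no overhead.

    # 1) Replica traffic: a write with replica N is sent to N bricks;
    #    with one brick local, roughly N-1 copies cross the network.
    for replicas in (2, 3):
        print(f"replica {replicas}: ~{replicas - 1}x the written data on the wire")

    # 2) Live migration: moving a VM means shipping its RAM. This is a
    #    lower bound, since pages dirtied during the copy are re-sent.
    ram_gb = 32
    for link_gbps in (10, 40):
        seconds = ram_gb * 8 / link_gbps   # GB -> gigabits, over link speed
        print(f"{link_gbps} Gb/s link: ~{seconds:.0f} s to move {ram_gb} GB of RAM")

So on 10Gb the 32GB migration alone is roughly 26 seconds of saturated link,
versus about 6 seconds on 40Gb, before any replica traffic on top of it.
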
But my confusion is that, as other fellows have said, the best "performance
model" is to use hardware-RAIDed bricks (i.e. RAID 5 or 6) to assemble your
GlusterFS. In that case I would have to buy a good controller but would have
less network traffic, so to lower the cost I would use a separate network
made of 10Gb NICs plus the controller.
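
For what it is worth, here is the raw capacity side of that trade-off, per
server (another small Python sketch; the 4 TB disk size is just an example
I picked, not what we actually have):

    # Usable brick capacity per server from 6 data disks (example: 4 TB each).
    disks, size_tb = 6, 4
    layouts = {
        "JBOD (6 separate bricks)": disks * size_tb,        # no local redundancy
        "RAID 5 (one brick)":       (disks - 1) * size_tb,  # one disk of parity
        "RAID 6 (one brick)":       (disks - 2) * size_tb,  # two disks of parity
    }
    for name, tb in layouts.items():
        print(f"{name}: {tb} TB usable per server, before Gluster replication")

So the controller buys local rebuilds and less network traffic, but it also
costs one or two disks of capacity per server on top of its own price.
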
Moacir

> > On 8 Aug 2017, at 04:08, FERNANDO FREDIANI <fernando.frediani(a)upx.com> wrote:
> >
> > Even if you have a Hardware RAID Controller with Writeback cache you
> > will have a significant performance penalty and may not fully use all
> > the resources you mentioned you have.
>
> Nope again. From my experience with HP Smart Array and write-back cache,
> writes that go into the cache are even faster than reads, which must go
> to the disks. Of course, if the writes are too fast and too big, they
> will overflow the cache. But today's controllers have multi-gigabyte
> caches; you have to write a lot to fill them. And if you can afford a
> 40Gb card, you can afford a decent controller.

The last sentence raises an excellent point: balance your resources. Don't
spend a fortune on one component while another will end up being your
bottleneck.
Storage is usually the slowest link in the chain. I personally believe that
spending the money on NVMe drives makes more sense than spending it on 40Gb
NICs (except [1], which is suspiciously cheap!).
Y.
[1] http://a.co/4hsCTqG