Hi,
In our setup, we used each SSD as a standalone brick (no RAID) and created a distributed-replicated volume with sharding.
Also, we are NOT managing Gluster from oVirt.
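For reference, a volume of this shape might be created along the following lines. This is only a sketch: host names, brick paths, and the shard size are illustrative placeholders, not details from our actual setup.

```shell
# Hypothetical layout: three nodes, each contributing two SSDs as
# standalone bricks (no RAID), forming a distributed 3-way replicated
# volume. Brick order matters: each consecutive group of 3 bricks
# becomes one replica set.
gluster volume create vmstore replica 3 \
  node1:/bricks/ssd1/brick node2:/bricks/ssd1/brick node3:/bricks/ssd1/brick \
  node1:/bricks/ssd2/brick node2:/bricks/ssd2/brick node3:/bricks/ssd2/brick

# Enable sharding so large VM images are split into fixed-size pieces,
# spreading I/O and healing across bricks. 64MB is the Gluster default
# shard size; tune to taste for VM workloads.
gluster volume set vmstore features.shard on
gluster volume set vmstore features.shard-block-size 64MB

gluster volume start vmstore
```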
--
Respectfully
Mahdi A. Mahdi
________________________________
From: users-bounces@ovirt.org <users-bounces@ovirt.org> on behalf of Barak Korren <bkorren@redhat.com>
Sent: Sunday, June 11, 2017 11:20:45 AM
To: Yaniv Kaul
Cc: ovirt@fateknollogee.com; Ovirt Users
Subject: Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice
On 11 June 2017 at 11:08, Yaniv Kaul <ykaul@redhat.com> wrote:
>> I will install the o/s for each node on a SATADOM.
>> Since each node will have 6x SSD for gluster storage.
>> Should this be software RAID, hardware RAID or no RAID?
>
> I'd reckon that you should prefer HW RAID over software RAID, and some RAID
> over no RAID at all, but it really depends on your budget, performance, and
> your availability requirements.
Not sure that is the best advice, given the use of Gluster+SSDs for hosting individual VMs.

Typical software or hardware RAID systems are designed for use with spinning disks, and may not yield any better performance on SSDs. RAID is also not very good when I/O is highly scattered, as it probably is when running multiple different VMs.

So we are left with using RAID solely for availability. I think Gluster may already provide that, so adding additional software or hardware layers for RAID may just degrade performance without providing any tangible benefits.

I think just defining each SSD as a single Gluster brick may provide the best performance for VMs, but my understanding of this is theoretical, so I leave it to the Gluster people to provide further insight.
--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. |
redhat.com/trusted
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users