Hi,
4 SSDs in a "distributed replica 2" volume for VM images, with an additional 20 HDDs in another volume.
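For reference, a volume along these lines can be created roughly as in the sketch below; the hostnames, brick paths and shard block size are placeholders rather than our exact values, and as noted in the earlier mail quoted below, each SSD is a standalone brick with sharding enabled.

    #!/usr/bin/env python
    # Rough sketch only: distributed-replicate Gluster volume with sharding,
    # one brick per SSD, no RAID. Hostnames, mount points and the shard block
    # size are placeholders.
    import subprocess

    HOSTS = ["node1", "node2"]                       # replica pair members
    SSD_MOUNTS = ["/gluster/ssd1", "/gluster/ssd2"]  # one XFS mount per SSD
    VOLUME = "vmstore"

    # Brick order matters: consecutive bricks form a replica set, so list the
    # same SSD slot on both hosts before moving on to the next SSD.
    bricks = ["%s:%s/brick" % (h, m) for m in SSD_MOUNTS for h in HOSTS]

    subprocess.check_call(["gluster", "volume", "create", VOLUME,
                           "replica", "2"] + bricks)
    # Shard large VM images across bricks and apply the virt tuning group
    # commonly used for oVirt image stores.
    for opt, val in [("features.shard", "on"),
                     ("features.shard-block-size", "64MB"),
                     ("group", "virt")]:
        subprocess.check_call(["gluster", "volume", "set", VOLUME, opt, val])
    subprocess.check_call(["gluster", "volume", "start", VOLUME])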
We had some minor XFS issues with the HDD volume.
As for monitoring, we use standard SNMP plus a few scripts to read the SMART reports; we're still looking for a better way to monitor Gluster.
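As a rough illustration only (not the exact script), the SMART side of it boils down to something like the sketch below; the device list is a placeholder and the result gets exposed through SNMP separately.

    #!/usr/bin/env python
    # Minimal sketch of a SMART health check, run per host.
    # Device names are placeholders; results are exposed over SNMP separately.
    import subprocess
    import sys

    DEVICES = ["/dev/sda", "/dev/sdb"]  # placeholder list of brick disks

    def smart_ok(dev):
        # "smartctl -H" prints the overall-health self-assessment result.
        out = subprocess.run(["smartctl", "-H", dev],
                             stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                             universal_newlines=True)
        return "PASSED" in out.stdout

    failed = [d for d in DEVICES if not smart_ok(d)]
    if failed:
        print("FAILED: " + ", ".join(failed))
        sys.exit(1)
    print("OK")

On the Gluster side, polling "gluster volume status" and "gluster volume heal <volname> info" the same way at least catches bricks that are down or files pending heal, but it's not a real monitoring solution yet.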
The hardware is Cisco UCS C220.
We have another setup, but it's not HC; it's equipped with 96 SSDs only.
No major issues so far.
--
Respectfully
Mahdi A. Mahdi
________________________________
From: ovirt(a)fateknollogee.com <ovirt(a)fateknollogee.com>
Sent: Sunday, June 11, 2017 4:45:30 PM
To: Mahdi Adnan
Cc: Barak Korren; Yaniv Kaul; Ovirt Users
Subject: Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice
Mahdi,
Can you share some more detail on your hardware?
How many total SSDs?
Have you had any drive failures?
How do you monitor for failed drives?
Was it a problem replacing failed drives?
On 2017-06-11 02:21, Mahdi Adnan wrote:
Hi,
In our setup, we used each SSD as a standalone brick (no RAID) and
created a distributed replica volume with sharding.
Also, we are NOT managing Gluster from oVirt.
--
Respectfully
Mahdi A. Mahdi
-------------------------
From: users-bounces(a)ovirt.org <users-bounces(a)ovirt.org> on behalf of Barak Korren <bkorren(a)redhat.com>
Sent: Sunday, June 11, 2017 11:20:45 AM
To: Yaniv Kaul
Cc: ovirt(a)fateknollogee.com; Ovirt Users
Subject: Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice
On 11 June 2017 at 11:08, Yaniv Kaul <ykaul(a)redhat.com> wrote:
>> I will install the o/s for each node on a SATADOM.
>> Since each node will have 6x SSD for gluster storage.
>> Should this be software RAID, hardware RAID or no RAID?
> I'd reckon that you should prefer HW RAID over software RAID, and some
> RAID over no RAID at all, but it really depends on your budget,
> performance, and your availability requirements.
Not sure that is the best advice, given the use of Gluster+SSDs for
hosting individual VMs.
Typical software or hardware RAID systems are designed for use with
spinning disks, and may not yield any better performance on SSDs. RAID
is also not very good when I/O is highly scattered, as it probably is
when running multiple different VMs.
So we are left with using RAID solely for availability. I think
Gluster may already provide that, so adding extra software or
hardware RAID layers may just degrade performance without providing
any tangible benefit.
I think just defining each SSD as a single Gluster brick may provide
the best performance for VMs, but my understanding of this is
theoretical, so I leave it to the Gluster people to provide further
insight.
--
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. |
redhat.com/trusted
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users