<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from text --><style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
</head>
<body>
<style type="text/css" style="">
<!--
p
        {margin-top:0;
        margin-bottom:0}
-->
</style>
<div dir="ltr">
<div id="x_divtagdefaultwrapper" dir="ltr" style="font-size:12pt; color:#000000; font-family:Calibri,Arial,Helvetica,sans-serif">
<p>Hi,</p>
<p><br>
</p>
<p>We have 4 SSDs in a "distributed replica 2" volume for VM images, with an additional 20 HDDs in another volume.</p>
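<p>Roughly speaking, creating a layout like that looks like the sketch below; the volume name, hostnames, and brick paths are placeholders, not our real ones, and the sharding option is the one mentioned in the quoted message further down:</p>
<pre>
import subprocess

# Placeholder hosts/paths -- each SSD is a standalone brick, no RAID.
bricks = [
    "node1:/gluster/ssd1/brick", "node2:/gluster/ssd1/brick",
    "node1:/gluster/ssd2/brick", "node2:/gluster/ssd2/brick",
]

def gluster(*args):
    # --mode=script auto-answers gluster's interactive prompts,
    # e.g. the split-brain warning that replica-2 volumes trigger.
    subprocess.run(["gluster", "--mode=script", *args], check=True)

# 4 bricks at replica 2 gives a 2 x 2 distributed-replicate volume.
gluster("volume", "create", "vmstore", "replica", "2", *bricks)
# Sharding splits large VM images into fixed-size chunks, which
# spreads I/O across bricks and makes self-heal much cheaper.
gluster("volume", "set", "vmstore", "features.shard", "on")
gluster("volume", "start", "vmstore")
</pre>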
<p>We had some minor XFS issues with the HDD volume.</p>
<p>As for monitoring, we use standard SNMP plus a few scripts that read the SMART reports; we're still looking for a better way to monitor Gluster.</p>
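<p>As a rough sketch of what those SMART-reading scripts boil down to (the device list is a placeholder, and smartctl's -j JSON output needs smartmontools 7.0 or newer):</p>
<pre>
import json
import subprocess

def smart_ok(device):
    # -H asks for overall health, -j for machine-readable JSON.
    # smartctl uses nonzero exit codes as status flags, so we do
    # not pass check=True here.
    out = subprocess.run(
        ["smartctl", "-H", "-j", device],
        capture_output=True, text=True,
    )
    report = json.loads(out.stdout)
    return report.get("smart_status", {}).get("passed", False)

# Placeholder list; in practice we enumerate the real disks.
for dev in ("/dev/sda", "/dev/sdb"):
    if not smart_ok(dev):
        print("ALERT: %s failed its SMART health check" % dev)
</pre>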
<p>The hardware is Cisco UCS C220.</p>
<p><br>
</p>
<p>We have another setup, though it is not hyperconverged, and it's equipped with 96 SSDs only.</p>
<p>No major issues so far.</p>
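<p>For what it's worth, when a brick SSD does die in a JBOD layout like this, replacing it is essentially one replace-brick call followed by a heal; a minimal sketch with placeholder names:</p>
<pre>
import subprocess

def gluster(*args):
    subprocess.run(["gluster", "--mode=script", *args], check=True)

# Placeholder volume and brick paths. Once the new SSD is mounted
# at the new brick path, replace-brick swaps it into the volume and
# self-heal rebuilds its data from the surviving replica.
gluster("volume", "replace-brick", "vmstore",
        "node1:/gluster/ssd3/brick",   # failed brick
        "node1:/gluster/ssd3b/brick",  # replacement brick
        "commit", "force")
gluster("volume", "heal", "vmstore")   # kick off the self-heal
</pre>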
<p><br>
</p>
<div id="x_Signature"><br>
<div class="x_ecxmoz-signature">-- <br>
<br>
<font color="#3366ff"><font color="#000000">Respectfully<b><br>
</b><b>Mahdi A. Mahdi</b></font></font><font color="#3366ff"><br>
<br>
</font><font color="#3366ff"></font></div>
</div>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> ovirt@fateknollogee.com <ovirt@fateknollogee.com><br>
<b>Sent:</b> Sunday, June 11, 2017 4:45:30 PM<br>
<b>To:</b> Mahdi Adnan<br>
<b>Cc:</b> Barak Korren; Yaniv Kaul; Ovirt Users<br>
<b>Subject:</b> Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice</font>
<div> </div>
</div>
</div>
<font size="2"><span style="font-size:10pt;">
<div class="PlainText">Mahdi,<br>
<br>
Can you share some more detail on your hardware?<br>
How many total SSDs?<br>
Have you had any drive failures?<br>
How do you monitor for failed drives?<br>
Was it a problem replacing failed drives?<br>
<br>
On 2017-06-11 02:21, Mahdi Adnan wrote:<br>
> Hi,<br>
> <br>
> In our setup, we used each SSD as a standalone brick ("no RAID") and<br>
> created a distributed replica volume with sharding.<br>
> <br>
> Also, we are NOT managing Gluster from oVirt.<br>
> <br>
> --<br>
> <br>
> Respectfully<br>
> Mahdi A. Mahdi<br>
> <br>
> -------------------------<br>
> <br>
> From: users-bounces@ovirt.org &lt;users-bounces@ovirt.org&gt; on behalf of<br>
> Barak Korren &lt;bkorren@redhat.com&gt;<br>
> Sent: Sunday, June 11, 2017 11:20:45 AM<br>
> To: Yaniv Kaul<br>
> Cc: ovirt@fateknollogee.com; Ovirt Users<br>
> Subject: Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster<br>
> storage best practice<br>
> <br>
> On 11 June 2017 at 11:08, Yaniv Kaul &lt;ykaul@redhat.com&gt; wrote:<br>
>> <br>
>>> I will install the o/s for each node on a SATADOM.<br>
>>> Since each node will have 6x SSD for gluster storage.<br>
>>> Should this be software RAID, hardware RAID or no RAID?<br>
>> <br>
>> I'd reckon that you should prefer HW RAID over software RAID, and<br>
>> some RAID over no RAID at all, but it really depends on your budget,<br>
>> performance, and your availability requirements.<br>
>> <br>
> <br>
> Not sure that is the best advice, given the use of Gluster+SSDs for<br>
> hosting individual VMs.<br>
> <br>
> Typical software or hardware RAID systems are designed for use with<br>
> spinning disks, and may not yield any better performance on SSDs. RAID<br>
> is also not very good when I/O is highly scattered as it probably is<br>
> when running multiple different VMs.<br>
> <br>
> So we are left with using RAID solely for availability. I think<br>
> Gluster may already provide that, so adding additional software or<br>
> hardware layers for RAID may just degrade performance without<br>
> providing any tangible benefits.<br>
> <br>
> I think just defining each SSD as a single Gluster brick may provide<br>
> the best performance for VMs, but my understanding of this is<br>
> theoretical, so I leave it to the Gluster people to provide further<br>
> insight.<br>
> <br>
> --<br>
> Barak Korren<br>
> RHV DevOps team, RHCE, RHCi<br>
> Red Hat EMEA<br>
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted<br>
> _______________________________________________<br>
> Users mailing list<br>
> Users@ovirt.org<br>
> <a href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a><br>
</div>
</span></font>
</body>
</html>