Hardware for Hyperconverged oVirt: Gluster storage best practice

Martin,

Looking to test oVirt on real hardware (aka no nesting).

Scenario #1: 1x Supermicro 2027TR-HTRF 2U 4-node server. I will install the o/s for each node on a SATADOM. Each node will have 6x SSD for Gluster storage. Should this be software RAID, hardware RAID or no RAID?

Scenario #2: 3x Supermicro SC216E16-R1200LPB 2U servers. Each server has 24x 2.5" bays (front) + 2x 2.5" bays (rear). I will install the o/s on drives in the rear bays (maybe RAID 1?). For Gluster, we will use the 24 front bays. Should this be software RAID, hardware RAID or no RAID?

Thanks,
Femi

On Sat, Jun 10, 2017 at 1:43 PM, <ovirt@fateknollogee.com> wrote:
Martin,
Looking to test oVirt on real hardware (aka no nesting)
Scenario # 1: 1x Supermicro 2027TR-HTRF 2U 4 node server
Is that a hyper-converged setup of both oVirt and Gluster? We usually do it in batches of 3 nodes.
I will install the o/s for each node on a SATADOM.
Since each node will have 6x SSD for gluster storage. Should this be software RAID, hardware RAID or no RAID?
I'd reckon that you should prefer HW RAID over software RAID, and some RAID over no RAID at all, but it really depends on your budget, performance, and availability requirements.
Scenario # 2: 3x SuperMicro SC216E16-R1200LPB 2U server Each server has 24x 2.5" bays (front) + 2x 2.5" bays (rear) I will install the o/s on the drives using the rear bays (maybe RAID 1?)
Makes sense (I could not see the rear bays - might have missed them). Will you be able to put some SSDs there for caching?
For Gluster, we will use the 24 front bays. Should this be software RAID, hardware RAID or no RAID?
Same answer as above.
Y.
Thanks,
Femi
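
For readers weighing the RAID options above: if the software RAID route were taken, the array would be assembled roughly as in the sketch below. The RAID level and device names are assumptions for illustration, not a recommendation from the thread.

    # Assumed: the six data SSDs appear as /dev/sdb .. /dev/sdg on one node
    mdadm --create /dev/md0 --level=10 --raid-devices=6 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    mkfs.xfs -i size=512 /dev/md0       # XFS, as typically used under Gluster bricks
    mkdir -p /gluster/brick1
    mount /dev/md0 /gluster/brick1

A hardware RAID controller would present a similar single block device to format, just without the mdadm step.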

On 11 June 2017 at 11:08, Yaniv Kaul <ykaul@redhat.com> wrote:
I will install the o/s for each node on a SATADOM. Since each node will have 6x SSD for gluster storage. Should this be software RAID, hardware RAID or no RAID?
I'd reckon that you should prefer HW RAID over software RAID, and some RAID over no RAID at all, but it really depends on your budget, performance, and availability requirements.
Not sure that is the best advice, given the use of Gluster+SSDs for hosting individual VMs.

Typical software or hardware RAID systems are designed for use with spinning disks, and may not yield any better performance on SSDs. RAID is also not very good when I/O is highly scattered, as it probably is when running multiple different VMs.

So we are left with using RAID solely for availability. I think Gluster may already provide that, so adding additional software or hardware layers for RAID may just degrade performance without providing any tangible benefits.

I think just defining each SSD as a single Gluster brick may provide the best performance for VMs, but my understanding of this is theoretical, so I leave it to the Gluster people to provide further insight.

--
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
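
To make the one-brick-per-SSD idea concrete, a replica 3 volume across three nodes could be created roughly as below. The hostnames, mount paths, brick count and volume name are assumptions for illustration only.

    # Assumed: each SSD formatted with XFS and mounted at /gluster/ssdN on node1..node3
    gluster volume create vmstore replica 3 \
        node1:/gluster/ssd1/brick node2:/gluster/ssd1/brick node3:/gluster/ssd1/brick \
        node1:/gluster/ssd2/brick node2:/gluster/ssd2/brick node3:/gluster/ssd2/brick
    gluster volume start vmstore

Each group of three bricks forms one replica set, so a single failed SSD still leaves two copies of the data it held.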

Hi,

In our setup, we used each SSD as a standalone brick "no RAID" and created distributed replica with sharding.

Also, we are NOT managing Gluster from ovirt.

--
Respectfully,
Mahdi A. Mahdi
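
A minimal sketch of the sharding and virt tuning Mahdi describes, applied to a hypothetical volume named vmstore; the shard block size shown is a commonly used value, not one taken from this thread.

    gluster volume set vmstore group virt              # apply the virt tuning profile shipped with Gluster
    gluster volume set vmstore features.shard on       # store large VM images as fixed-size shards
    gluster volume set vmstore features.shard-block-size 512MB

With sharding, a brick failure only has to re-heal the affected shards rather than whole multi-gigabyte image files.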

Mahdi,

Can you share some more detail on your hardware?
How many total SSDs?
Have you had any drive failures?
How do you monitor for failed drives?
Was it a problem replacing failed drives?

On 2017-06-11 02:21, Mahdi Adnan wrote:
Hi,
In our setup, we used each SSD as a standalone brick "no RAID" and created distributed replica with sharding.
Also, we are NOT managing Gluster from ovirt.
--
Respectfully,
Mahdi A. Mahdi
-------------------------
From: users-bounces@ovirt.org <users-bounces@ovirt.org> on behalf of Barak Korren <bkorren@redhat.com>
Sent: Sunday, June 11, 2017 11:20:45 AM
To: Yaniv Kaul
Cc: ovirt@fateknollogee.com; Ovirt Users
Subject: Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice
On 11 June 2017 at 11:08, Yaniv Kaul <ykaul@redhat.com> wrote:
I will install the o/s for each node on a SATADOM. Since each node will have 6x SSD for gluster storage. Should this be software RAID, hardware RAID or no RAID?
I'd reckon that you should prefer HW RAID over software RAID, and some RAID over no RAID at all, but it really depends on your budget, performance, and availability requirements.
Not sure that is the best advice, given the use of Gluster+SSDs for hosting individual VMs.
Typical software or hardware RAID systems are designed for use with spinning disks, and may not yield any better performance on SSDs. RAID is also not very good when I/O is highly scattered as it probably is when running multiple different VMs.
So we are left with using RAID solely for availability. I think Gluster may already provide that, so adding additional software or hardware layers for RAID may just degrade performance without providing any tangible benefits.
I think just defining each SSD as a single Gluster brick may provide the best performance for VMs, but my understanding of this is theoretical, so I leave it to the Gluster people to provide further insight.
--
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA

Hi,

4 SSDs in a "distributed replica 2" volume for VM images, with an additional 20 HDDs in another volume. We had some minor XFS issues with the HDD volume.

As for monitoring, standard SNMP with a few scripts to read SMART reports; we're still looking for a better way to monitor Gluster.

Hardware is Cisco UCS C220.

We have another setup, not HC, and it's equipped with 96 SSDs only. No major issues so far.

--
Respectfully,
Mahdi A. Mahdi
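
A rough sketch of the sort of SMART/Gluster health sweep Mahdi mentions; the device names and volume name are assumptions, and a real deployment would feed the results into SNMP or another monitoring system rather than just printing warnings.

    #!/bin/bash
    # Minimal health sweep: SMART status per data disk, plus Gluster brick/heal state.
    for dev in /dev/sd{b..g}; do        # assumed device names for the data SSDs
        smartctl -H "$dev" | grep -Eq "PASSED|OK" \
            || echo "WARNING: $dev failed its SMART health check"
    done
    gluster volume status vmstore       # confirm all brick processes are online
    gluster volume heal vmstore info    # list entries still waiting to be healed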

I think just defining each SSD as a single Gluster brick may provide the best performance for VMs, but my understanding of this is theoretical, so I leave it to the Gluster people to provide further insight.
Barak, very interesting, I had never thought of doing it this way, but your idea does make sense.
I assume Gluster is able to tolerate drive failures in the array?
I'm also interested in hearing what the Gluster folks think about your approach.

On 2017-06-11 01:20, Barak Korren wrote:
On 11 June 2017 at 11:08, Yaniv Kaul <ykaul@redhat.com> wrote:
I will install the o/s for each node on a SATADOM. Since each node will have 6x SSD for gluster storage. Should this be software RAID, hardware RAID or no RAID?
I'd reckon that you should prefer HW RAID over software RAID, and some RAID over no RAID at all, but it really depends on your budget, performance, and availability requirements.
Not sure that is the best advice, given the use of Gluster+SSDs for hosting individual VMs.
Typical software or hardware RAID systems are designed for use with spinning disks, and may not yield any better performance on SSDs. RAID is also not very good when I/O is highly scattered as it probably is when running multiple different VMs.
So we are left with using RAID solely for availability. I think Gluster may already provide that, so adding additional software or hardware layers for RAID may just degrade performance without providing any tangible benefits.
I think just defining each SSD as a single Gluster brick may provide the best performance for VMs, but my understanding of this is theoretical, so I leave it to the Gluster people to provide further insight.
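
On the drive-failure question above: with one SSD per brick in a replica volume, the usual recovery is to swap the failed disk, mount a fresh filesystem, and replace the brick so self-heal can copy the data back from the surviving replicas. A hedged sketch, with hostnames, paths and volume name assumed for illustration:

    # After physically replacing the failed SSD and mounting its new filesystem:
    gluster volume replace-brick vmstore \
        node2:/gluster/ssd1/brick node2:/gluster/ssd1-new/brick commit force
    gluster volume heal vmstore full    # trigger a full self-heal onto the new brick
    gluster volume heal vmstore info    # watch until no entries remain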

I think just defining each SSD as a single Gluster brick may provide the best performance for VMs, but my understanding of this is theoretical, so I leave it to the Gluster people to provide further insight.
Pardon my lack of knowledge (I'm an oVirt/Gluster newbie).
I assume the SSD-to-single-Gluster-brick layout can be done using gdeploy in oVirt?

On 2017-06-11 01:20, Barak Korren wrote:
On 11 June 2017 at 11:08, Yaniv Kaul <ykaul@redhat.com> wrote:
I will install the o/s for each node on a SATADOM. Since each node will have 6x SSD for gluster storage. Should this be software RAID, hardware RAID or no RAID?
I'd reckon that you should prefer HW RAID over software RAID, and some RAID over no RAID at all, but it really depends on your budget, performance, and availability requirements.
Not sure that is the best advice, given the use of Gluster+SSDs for hosting individual VMs.
Typical software or hardware RAID systems are designed for use with spinning disks, and may not yield any better performance on SSDs. RAID is also not very good when I/O is highly scattered as it probably is when running multiple different VMs.
So we are left with using RAID solely for availability. I think Gluster may already provide that, so adding additional software or hardware layers for RAID may just degrade performance without providing any tangible benefits.
I think just defining each SSD as a single Gluster brick may provide the best performance for VMs, but my understanding of this is theoretical, so I leave it to the Gluster people to provide further insight.
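
On the gdeploy question: the hyper-converged deployment flow does drive brick and volume creation from a gdeploy configuration file, so a one-brick-per-SSD layout can be expressed there. The fragment below is only a rough sketch from memory; the section and key names vary between gdeploy versions and should be treated as assumptions, not a verified configuration.

    [hosts]
    node1
    node2
    node3

    # One VG/LV/mountpoint per SSD on every host (key names are assumptions)
    [backend-setup]
    devices=sdb,sdc
    vgs=vg_ssd1,vg_ssd2
    lvs=lv_ssd1,lv_ssd2
    mountpoints=/gluster/ssd1,/gluster/ssd2
    brick_dirs=/gluster/ssd1/brick,/gluster/ssd2/brick

    [volume]
    action=create
    volname=vmstore
    replica=yes
    replica_count=3
    key=group,features.shard
    value=virt,on
    brick_dirs=/gluster/ssd1/brick,/gluster/ssd2/brick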

Is that a hyper-converged setup of both oVirt and Gluster? We usually do it in batches of 3 nodes.
Yes, it is for an HC setup of both oVirt & Gluster.
..it really depends on your budget, performance, and your availability requirements.
I would like to enhance the performance.
Makes sense (I could not see the rear bays - might have missed them). Will you be able to put some SSDs there for caching?
This is the correct part #: https://www.supermicro.com/products/chassis/2U/216/SC216BE26-R920LPB
None of the oVirt docs/videos I have seen mention using SSDs for caching, so I was not planning on caching. I planned to install oVirt Node on the rear SSDs.
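
For reference on the caching idea raised earlier in the thread: one common way to put a rear-bay SSD in front of slower brick drives is dm-cache via LVM. The device names, sizes and role assignments below are assumptions, shown only to illustrate the shape of such a setup.

    # Assumed: /dev/sdb is a front-bay data drive, /dev/sdy is a rear-bay SSD
    pvcreate /dev/sdb /dev/sdy
    vgcreate gluster_vg /dev/sdb /dev/sdy
    lvcreate -L 1.8T -n brick1 gluster_vg /dev/sdb           # data LV on the slow drive
    lvcreate -L 180G -n brick1_cache gluster_vg /dev/sdy     # cache data LV on the SSD
    lvcreate -L 2G -n brick1_cmeta gluster_vg /dev/sdy       # cache metadata LV on the SSD
    lvconvert --type cache-pool --poolmetadata gluster_vg/brick1_cmeta gluster_vg/brick1_cache
    lvconvert --type cache --cachepool gluster_vg/brick1_cache gluster_vg/brick1
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1              # format the cached LV as the brick filesystem
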
On 2017-06-11 01:08, Yaniv Kaul wrote:
On Sat, Jun 10, 2017 at 1:43 PM, <ovirt@fateknollogee.com> wrote:
Martin,
Looking to test oVirt on real hardware (aka no nesting)
Scenario # 1: 1x Supermicro 2027TR-HTRF 2U 4 node server
Is that a hyper-converged setup of both oVirt and Gluster? We usually do it in batches of 3 nodes.
I will install the o/s for each node on a SATADOM. Since each node will have 6x SSD for gluster storage. Should this be software RAID, hardware RAID or no RAID?
I'd reckon that you should prefer HW RAID over software RAID, and some RAID over no RAID at all, but it really depends on your budget, performance, and availability requirements.
Scenario # 2: 3x SuperMicro SC216E16-R1200LPB 2U server Each server has 24x 2.5" bays (front) + 2x 2.5" bays (rear) I will install the o/s on the drives using the rear bays (maybe RAID 1?)
Makes sense (I could not see the rear bays - might have missed them). Will you be able to put some SSDs there for caching?
For Gluster, we will use the 24 front bays. Should this be software RAID, hardware RAID or no RAID?
Same answer as above.
Y.
Thanks,
Femi
participants (4)
- Barak Korren
- Mahdi Adnan
- ovirt@fateknollogee.com
- Yaniv Kaul