Bill, thanks for sharing your findings. I've also been curious and haven't had much luck finding reference hardware or expected performance numbers for HCI builds. I just put together three top-end R720s with 256GB RAM and two 2TB SSDs in JBOD per host, with a 10GbE backend for Gluster. I just finished racking the servers and haven't had a chance to start the oVirt install yet, but I hope to get to it within the next couple of weeks. I've been a bit worried about Gluster's performance; I'm hoping I won't be disappointed.
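
If I understand the replica 3 layout right, two bricks per host across three hosts should end up as a volume like the sketch below (hostnames and brick paths are placeholders, not my actual config):

    # one replica-3 subvolume per SSD, mirrored across the three hosts
    gluster volume create vmstore replica 3 \
        host1:/gluster/ssd1/brick host2:/gluster/ssd1/brick host3:/gluster/ssd1/brick \
        host1:/gluster/ssd2/brick host2:/gluster/ssd2/brick host3:/gluster/ssd2/brick
    gluster volume start vmstore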

Jayme

On Sat, Jun 23, 2018, 10:45 AM <william.dossett@gmail.com> wrote:

I have deployed an HCI environment several times now, as I wanted to get some idea of disk performance with Gluster.

 

The tests ran on a Dell R720 3-node cluster, each node with an H710 PERC controller and 8 x 2TB 7200 RPM SATA drives.

 

My first test was with the 8 drives configured as H/W RAID 6, and I configured Gluster for RAID 6 as well – quite a lot of redundancy, but that was just my first deployment.
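
For anyone scripting the controller side instead of using the BIOS utility, something along these lines with perccli should build that virtual disk (the controller number and enclosure:slot IDs are placeholders; check yours with "perccli show all"):

    # single RAID 6 virtual disk spanning all 8 drives (IDs are placeholders)
    perccli /c0 add vd type=raid6 drives=32:0-7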

 

Running IOMeter for 24 hours using the all-in-one access specification, I got 240 IOPS. Pretty good for SATA drives.
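
For anyone without IOMeter, a rough fio approximation from inside a Linux guest would be something like this; the 70/30 4k random mix is my stand-in for a generic mixed workload, not an exact port of the all-in-one spec:

    # mixed random read/write test, roughly comparable to a generic IOMeter mix
    fio --name=mixedtest --filename=/mnt/vmstore/fio.dat --size=4G \
        --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 --bs=4k \
        --iodepth=16 --runtime=600 --time_based --group_reporting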

 

I then broke the RAID and configured 8 virtual disks, one per physical drive, and deployed Gluster as JBOD – I am not sure how resilient that is, but I assume that in a 3 node cluster the failures to tolerate would be one.

 

This gave me 267 IOPS.
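
For the record, the JBOD layout works out to one replica-3 subvolume per physical disk, so the volume create looks roughly like the sketch below (hostnames and paths are illustrative, and the pattern continues through disk8):

    # 8 disks x 3 hosts = 24 bricks, grouped into 8 replica-3 subvolumes
    gluster volume create vmstore replica 3 \
        host1:/gluster/disk1/brick host2:/gluster/disk1/brick host3:/gluster/disk1/brick \
        host1:/gluster/disk2/brick host2:/gluster/disk2/brick host3:/gluster/disk2/brick

Each replica set mirrors one disk across all three hosts, so losing a single host still leaves two copies of everything, which matches the one-failure-to-tolerate assumption above.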

 

I don’t know that much about the internals of Gluster, but when I first asked about this there didn’t seem to be much knowledge of which configuration would be best for HCI. I plan to do more research and testing, but for what it’s worth, for now I am going down the JBOD route with no H/W RAID.

 

Regards

Bill

 
