Personally I feel like RAID underneath GlusterFS is too wasteful. It does
give you a few advantages, such as being able to replace a failed drive at
the RAID level instead of replacing bricks in Gluster.
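(For context, a brick swap on the Gluster side goes roughly like this; the
volume name and brick paths below are just placeholders:)

    # Replace the dead brick with a new one; self-heal repopulates it
    gluster volume replace-brick myvol \
        host1:/gluster_bricks/old/brick \
        host1:/gluster_bricks/new/brick \
        commit force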
In my production HCI setup I have three Dell hosts, each with two 2 TB SSDs
in JBOD. I find this setup works well for me, but I have not yet run into
any drive-failure scenarios.
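For reference, two SSDs across three hosts ends up as a 2 x 3
distributed-replicate volume, created along these lines (hostnames and
brick paths are placeholders):

    # Consecutive groups of 3 bricks form the replica sets,
    # so each SSD pairs with its counterpart on the other hosts
    gluster volume create data replica 3 \
        host1:/gluster_bricks/ssd1/brick \
        host2:/gluster_bricks/ssd1/brick \
        host3:/gluster_bricks/ssd1/brick \
        host1:/gluster_bricks/ssd2/brick \
        host2:/gluster_bricks/ssd2/brick \
        host3:/gluster_bricks/ssd2/brick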
What PERC card do you have in the Dell machines? JBOD is tough with most
PERC cards; in many cases, to do JBOD you have to fake it with an
individual RAID 0 for each drive. Only some PERC controllers allow true
JBOD passthrough.
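For what it's worth, the check and the workaround look roughly like this
with perccli (Dell's rebrand of Broadcom's storcli; the controller number
and enclosure:slot IDs are placeholders):

    # See whether the controller supports true JBOD passthrough
    perccli64 /c0 show all | grep -i jbod

    # If it does, flip it on and the drives show up directly
    perccli64 /c0 set jbod=on

    # If not, fake it with a single-drive RAID 0 per disk
    perccli64 /c0 add vd type=raid0 drives=32:0
    perccli64 /c0 add vd type=raid0 drives=32:1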
On Fri, Feb 22, 2019 at 12:30 PM Guillaume Pavese <
guillaume.pavese(a)interactiv-group.com> wrote:
Hi,
We have been evaluating oVirt HyperConverged for 9 months now with a test
cluster of 3 Dell hosts with hardware RAID 5 on a PERC card.
We were not impressed with the performance...
There are no SSDs for LV cache on these hosts, but I tried anyway with LV
cache on a RAM device. Performance was almost unchanged.
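(The cache was attached more or less like this; the device, VG/LV names,
and sizes are made up for illustration:)

    # Carve a cache pool out of the fast device, then attach it to the data LV
    pvcreate /dev/ram0
    vgextend vg_gluster /dev/ram0
    lvcreate --type cache-pool -L 4G -n lv_cache vg_gluster /dev/ram0
    lvconvert --type cache --cachepool vg_gluster/lv_cache vg_gluster/lv_data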
It seems that LV cache is its own source of bugs and problems anyway, so
we are thinking of going with full NVMe drives when buying the production
cluster.
What would the recommendation be in that case, JBOD or RAID?
Thanks
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group