I do similar with ZFS. In fact, I have a mix of large multi-drive ZFS volumes as single
bricks, and a few SSDs with xfs as single bricks in other volumes, based on use.
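For what it's worth, a rough sketch of that kind of layout (pool layout, device names, and volume names are just placeholders, not my exact config):

  # a multi-drive ZFS pool used as one big brick
  zpool create tank raidz2 sdb sdc sdd sde sdf sdg
  zfs set xattr=sa tank          # commonly recommended for Gluster bricks on ZFS
  zfs set acltype=posixacl tank
  zfs create tank/brick1

  # an SSD formatted with xfs as a single brick for another volume
  mkfs.xfs -i size=512 /dev/sdh
  mkdir -p /bricks/ssd1 && mount /dev/sdh /bricks/ssd1

  # each then backs a brick in its own Gluster volume
  gluster volume create bigvol replica 3 host{1,2,3}:/tank/brick1/data
  gluster volume create fastvol replica 3 host{1,2,3}:/bricks/ssd1/data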
From what I’ve gathered watching the lists for a while, people with lots of single-drive bricks per node tend to see higher heal times, while people with one large brick per node (mdadm, hardware RAID, ZFS…) get better healing but maybe pay a small performance penalty. Seems people like to RAID their spinning disks and use SSDs or NVMes as single-drive bricks in most cases.
Obviously your hardware and use case will drive it, but with NVMes I’d be tempted to use them as single bricks. RAID 1 with them would let you lose a drive without having to heal Gluster, so that would be a bonus, and might get you more IOPS to boot. I’d do it if I could afford it ;) The ultimate answer is to test both configs, including healing across them, and see what works best for you.
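When you test healing, something like this is what I’d watch while killing a brick, writing data, and bringing it back (the volume name is a placeholder):

  gluster volume heal myvol info                    # files still pending heal, per brick
  gluster volume heal myvol statistics heal-count   # running count of heals per brick
  gluster volume heal myvol info summary            # quick overview once it settles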
On Feb 25, 2019, at 6:35 AM, Guillaume Pavese <guillaume.pavese@interactiv-group.com> wrote:
Thanks Jayme,
We currently use H730 PERC cards on our test cluster but we are not set on anything yet
for the production cluster.
We are indeed worried about losing a drive in JBOD mode. Would setting up a RAID 1 of NVMe drives with mdadm, and then using that as the JBOD drive for the volume, be a *good* idea? Is that even possible / something that people do?
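Roughly what we have in mind, as a sketch (device names, mount points, and the volume name are placeholders):

  # mirror two NVMe drives with mdadm
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
  # format the mirror and present it to Gluster as a single brick
  mkfs.xfs -i size=512 /dev/md0
  mkdir -p /gluster/brick1
  mount /dev/md0 /gluster/brick1
  gluster volume create myvol replica 3 host{1,2,3}:/gluster/brick1/data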
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
On Sat, Feb 23, 2019 at 2:51 AM Jayme <jaymef@gmail.com> wrote:
Personally I feel like RAID underneath GlusterFS is too wasteful. It would give you a few advantages, such as being able to replace a failed drive at the RAID level instead of replacing bricks with Gluster.
In my production HCI setup I have three Dell hosts, each with two 2TB SSDs in JBOD. I find this setup works well for me, but I have not yet run into any drive-failure scenarios.
What PERC card do you have in the Dell machines? JBOD is tough with most PERC cards; in many cases, to do JBOD you have to fake it with an individual RAID 0 for each drive. Only some PERC controllers allow true JBOD passthrough.
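For example, with perccli/storcli the difference looks roughly like this (the controller number and enclosure:slot IDs are placeholders, and whether the jbod commands are accepted at all depends on the card and firmware):

  # true JBOD passthrough, only on controllers that support it
  perccli64 /c0 set jbod=on
  perccli64 /c0/e32/s0 set jbod

  # otherwise, fake it with a single-drive RAID 0 per disk
  perccli64 /c0 add vd type=raid0 drives=32:0
  perccli64 /c0 add vd type=raid0 drives=32:1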
On Fri, Feb 22, 2019 at 12:30 PM Guillaume Pavese <guillaume.pavese@interactiv-group.com> wrote:
Hi,
We have been evaluating oVirt HyperConverged for 9 months now with a test cluster of 3 DELL hosts with hardware RAID 5 on a PERC card.
We were not impressed with the performance...
There is no SSD for LV cache on these hosts, but I tried anyway with LV cache on a RAM device. Performance was almost unchanged.
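For reference, the RAM-backed cache was set up along these lines (sizes and VG/LV names here are illustrative rather than our exact setup):

  # create a ram block device and add it to the VG holding the brick LV
  modprobe brd rd_nr=1 rd_size=4194304        # ~4 GiB /dev/ram0
  pvcreate /dev/ram0
  vgextend gluster_vg /dev/ram0
  # build a cache pool on the ram device and attach it to the brick LV
  lvcreate --type cache-pool -L 3G -n ram_cache gluster_vg /dev/ram0
  lvconvert --type cache --cachepool gluster_vg/ram_cache gluster_vg/brick_lv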
It seems that LV cache is its own source of bugs and problems anyway, so we are thinking of going with full NVMe drives when buying the production cluster.
What would the recommendation be in that case, JBOD or RAID?
Thanks
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KEVWLTZTSKX...