1) If you have a hardware RAID you want to use, you should just use NFS
/ Direct SCSI / FibreChannel on top of it and forgo Gluster. It's been
reported on the list that Gluster has latency issues when dealing with
a hardware RAID, which makes sense considering that Gluster is
essentially a software RAID. Using both is redundant and
counterproductive. (A bad write on the Gluster layer won't be detected
/ show up on the hardware layer, and vice versa.)
Throwing Gluster on top of LVM on top of a hardware RAID is not
something I'd recommend. You've got way too many layers of abstraction
there, and it will be a huge pain to debug, or perform data recovery
on, if something goes wrong later. Not to mention all of the resources
spent trying to figure out which data block goes where. (Don't forget
that each VM adds another layer of abstraction for its virtual hard
disk partitions / filesystem.) LVM and a hardware RAID perform
virtually all of the tasks Gluster would offer you. If having a
failover node is important to you, I'd just drop the hardware RAID and
use Gluster. I personally don't use LVM, as I find it to be the source
of a lot of unnecessary headaches, but you can use it with Gluster
without penalty.
Gluster, in my opinion, tends to work best without abstraction layers
underneath it. You can use LVM, but I personally would prefer physical
partitions / disks for the bricks, as there is less chance of LVM
grabbing the wrong partitions and causing the boot to drop to
emergency mode. (Which is reported on the list a lot.) Granted that's
not feasible in many cases, but I consider it when Gluster is involved.
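To make that concrete, here's a minimal sketch of bricks on plain XFS
partitions in a replica 3 / arbiter 1 volume. The host names
(node1-node3), device (/dev/sdb1), mount point, and volume name are
just placeholders for illustration, not anything from your setup:

  # On each node: format the raw partition with XFS and mount it as the brick root.
  mkfs.xfs -i size=512 /dev/sdb1
  mkdir -p /gluster/brick1
  mount /dev/sdb1 /gluster/brick1
  mkdir -p /gluster/brick1/data    # use a subdirectory, not the mount point itself

  # From one node: peer the others and create the replica 3 / arbiter 1 volume.
  gluster peer probe node2
  gluster peer probe node3
  gluster volume create vmstore replica 3 arbiter 1 \
      node1:/gluster/brick1/data \
      node2:/gluster/brick1/data \
      node3:/gluster/brick1/data
  gluster volume start vmstore

The third brick in each set is the arbiter, so it only stores metadata
and can live on a much smaller disk. The chained layout from that Red
Hat example just changes which host each data / arbiter brick lands
on; the command shape is the same.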
2) If you choose to use Gluster, the only thing you'll really lose by
doing a 3 node pool to start with is the time to replicate to the new
bricks after adding them. Although, if you want to test how things
work with your hardware RAID, you may want to start with all 6 so that
you can get accurate measurements, especially latency, for real usage.
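If you do start with 3 and grow later, extending the pool is mostly a
matter of probing the new peers and adding another brick set. Roughly
something like the following, again with placeholder host names and
paths matching the sketch above:

  # Probe the new peers.
  gluster peer probe node4
  gluster peer probe node5
  gluster peer probe node6

  # Add another replica 3 / arbiter 1 subvolume to the existing volume.
  gluster volume add-brick vmstore replica 3 arbiter 1 \
      node4:/gluster/brick1/data \
      node5:/gluster/brick1/data \
      node6:/gluster/brick1/data

  # Spread existing data onto the new bricks and watch progress.
  gluster volume rebalance vmstore start
  gluster volume rebalance vmstore status
  gluster volume heal vmstore info summary

That rebalance / heal time is the replication cost I mentioned above.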
-Patrick Hibbs
On Tue, 2022-05-31 at 03:14 +0000, bpbp(a)fastmail.com wrote:
Hi all, planning a new 6 node hyper-converged cluster. Have a couple
of questions
1) storage - I think we want to do 2x replicas and 1 arbiter, in the
chained configuration seen here (example 5.7)
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5...
Any suggestions on how that looks from the bottom up? For example,
does each host have all of its disks in a single hardware RAID 6
volume, with the bricks thinly provisioned via LVM on top so each node
has 2 data and 1 arbiter bricks? Or is something else recommended?
2) setup - Do I start with a 3 node pool and extend to 6, or use
Ansible to set up 6 from the start?
Thanks