Thanks Strahil! More questions may follow. Thanks Again For Your Help!

On Wed, Oct 14, 2020 at 12:29 AM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

Imagine you have a host with 60 spinning disks -> I would recommend you split it into 10/12-disk groups, and these groups will represent several bricks (6 or 5).
Keep in mind that when you start using many bricks (some articles say hundreds, but no exact number was given), you should consider brick multiplexing (cluster.brick-multiplex).
So, you can use as many bricks as you want, but each brick requires CPU time (a separate thread), a TCP port number and memory.
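If you do end up with that many bricks, multiplexing is a single cluster-wide option; a minimal sketch (assuming a recent Gluster release) would be:

# Applies cluster-wide ("all"), so bricks of all volumes share processes afterwards.
gluster volume set all cluster.brick-multiplex on
# Verify the setting (works on recent Gluster versions):
gluster volume get all cluster.brick-multiplex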
In my setup I use multiple bricks to spread the load via LACP over several small (1GbE) NICs.
The only "limitation" is to have your data on separate hosts , so when you create the volume it is extremely advisable that you follow this model:
hostA:/path/to/brick
hostB:/path/to/brick
hostC:/path/to/brick
hostA:/path/to/brick2
hostB:/path/to/brick2
hostC:/path/to/brick2
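For illustration only (the volume name "datavol" is made up, and the paths are just the placeholders above), a replica 3 distributed-replicated volume keeping that ordering could be created roughly like this:

# Listing bricks host-by-host keeps each replica set on three different hosts.
gluster volume create datavol replica 3 \
    hostA:/path/to/brick hostB:/path/to/brick hostC:/path/to/brick \
    hostA:/path/to/brick2 hostB:/path/to/brick2 hostC:/path/to/brick2
gluster volume start datavol

Gluster forms replica sets from consecutive bricks in the order they are listed, which is why the host ordering above matters.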
In JBOD mode, Red Hat supports only 'replica 3' volumes - just keep that in mind.
From my perspective, JBOD is suitable for NVMes/SSDs, while spinning disks should be in a RAID of some type (maybe RAID10 for performance).
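If you go the RAID route, a software-RAID10 group of 12 spinning disks could look roughly like this (device names, mount point and filesystem options are only placeholders - adjust to your hardware):

# Placeholder devices /dev/sd[b-m]; confirm the real names with lsblk first.
mdadm --create /dev/md0 --level=10 --raid-devices=12 /dev/sd[b-m]
# XFS with 512-byte inodes is commonly recommended for Gluster bricks.
mkfs.xfs -i size=512 /dev/md0
mkdir -p /gluster/brick1
mount /dev/md0 /gluster/brick1

A hardware RAID controller would do the same job without mdadm, of course.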
Best Regards,
Strahil Nikolov
On Wednesday, October 14, 2020, 06:34:17 GMT+3, C Williams <cwilliams3320@gmail.com> wrote:
Hello,
I am getting some questions from others on my team.
I have some hosts that could provide up to 6 JBOD disks for oVirt data (not arbiter) bricks.
Would this be workable / advisable? I'm under the impression that there should not be more than 1 data brick per HCI host.
Please correct me if I'm wrong.
Thank You For Your Help !