Could you clarify what disk groups you are referring to?

Regarding your statement "In JBOD mode, Red Hat support only 'replica 3' volumes" — does this also apply to "replica 3" variants, e.g.

Thank You For Your Help!

On Wed, Oct 14, 2020 at 7:34 AM C Williams <> wrote:
Thanks Strahil!

More questions may follow. 

Thanks Again For Your Help!

On Wed, Oct 14, 2020 at 12:29 AM Strahil Nikolov <> wrote:
Imagine you have a host with 60 spinning disks. I would recommend splitting them into groups of 10-12 disks each, so those groups form several bricks (5-6 per host).

Keep in mind that once you start using many bricks (some articles say hundreds, but no exact number is given), you should consider brick multiplexing (cluster.brick-multiplex).
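If it helps, here is a minimal sketch of turning that option on. Brick multiplexing is a cluster-wide setting applied to "all" rather than to a single volume; I'm assuming a recent Gluster release where the option exists:

```shell
# Brick multiplexing lets many bricks share one glusterfsd process,
# reducing the per-brick memory/port overhead. It is set cluster-wide:
gluster volume set all cluster.brick-multiplex on

# Verify the current value (output format varies between releases):
gluster volume get all cluster.brick-multiplex
```

Note it generally takes effect for bricks started after the option is enabled, so restarting volumes may be needed.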

So you can use as many bricks as you want, but each brick requires CPU time (a separate process/thread), a TCP port number, and memory.
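You can see that per-brick cost directly: each brick runs as its own glusterfsd with its own PID and TCP port, which "volume status" lists per brick (the volume name below is just a placeholder):

```shell
# Shows one line per brick with its hostname, path, TCP port and PID,
# which is where the per-brick port/CPU/memory cost comes from.
gluster volume status myvolume
```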

In my setup I use multiple bricks in order to spread the load, via LACP, over several small (1 GbE) NICs.

The only "limitation" is that your data must be on separate hosts, so when you create the volume it is strongly advisable to follow this model:


In JBOD mode, Red Hat supports only 'replica 3' volumes — just keep that in mind.
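As a sketch of what that brick ordering looks like in practice (host names and brick paths here are hypothetical): with "replica 3", every group of three consecutive bricks in the create command forms one replica set, so each set should span three different hosts:

```shell
# Hypothetical hosts gfs1/gfs2/gfs3. Each group of three consecutive
# bricks becomes one replica set, so ordering them host-by-host within
# each set keeps all three copies of the data on separate servers.
gluster volume create datavol replica 3 \
  gfs1:/gluster/brick1/data gfs2:/gluster/brick1/data gfs3:/gluster/brick1/data \
  gfs1:/gluster/brick2/data gfs2:/gluster/brick2/data gfs3:/gluster/brick2/data

gluster volume start datavol
```

Listing the bricks in the wrong order (e.g. all of one host's bricks first) would place replicas of the same data on the same host, defeating the redundancy.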

From my perspective, JBOD is suitable for NVMe/SSD drives, while spinning disks should be in a RAID of some type (maybe RAID10 for performance).

Best Regards,
Strahil Nikolov

On Wednesday, October 14, 2020, 06:34:17 GMT+3, C Williams <> wrote:


I am getting some questions from others on my team.

I have some hosts that could provide up to 6 JBOD disks for oVirt data (not arbiter) bricks.

Would this be workable/advisable? I'm under the impression that there should not be more than 1 data brick per HCI host.

Please correct me if I'm wrong.

Thank You For Your Help!

Users mailing list --