and you expect newcomers to find that significant bit of information within the reference
you quote as they try to evaluate whether oVirt is the right tool for the job?
I only found out once I tried to add dispersed volumes to an existing 3-node HCI and dug
through the log files.
Of course, I eventually managed to remove the nicely commented bits of ansible code that
prevented adding the volume, only to find that the volume could not be used to run VMs or
host their disks.
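For what it's worth, Gluster itself has no problem with such a volume; on the plain CLI
it's roughly something like this (volume name and brick paths are placeholders I'm making
up, 2+1 being the smallest dispersed layout that fits a 3-node cluster):

    gluster volume create dispvol disperse 3 redundancy 1 \
        node1:/gluster_bricks/dispvol node2:/gluster_bricks/dispvol \
        node3:/gluster_bricks/dispvol
    gluster volume start dispvol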
I can still mount those volumes from inside the VMs via a GlusterFS client and I'd
guess that there is little if any difference in performance.
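Something along these lines works fine from inside a guest, assuming the GlusterFS client
packages are installed (node and volume names again made up):

    mount -t glusterfs node1:/dispvol /mnt/dispvol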
For an enterprise HCI solution, the usable intersection between oVirt and Gluster is so
small that you need a magnifying glass to find it, and it should be called out very early,
right at the top of the documentation.
Gluster advertises itself as a scale-out file system, with the absence of any metadata
choke point as its main differentiator vs. Lustre etc., and with a tunable ratio of read
amplification (via replicas) and resilience.
Nobody expects "scale-out" to mean 1 or 3 nodes, with perhaps 6 and 9 as a special option.
Or that only replicas are actually supported by oVirt, when erasure coding should, at least
in theory, give you near-perfect scalability: you can grow in increments of one node or any
bigger number and freely allocate between capacity and resilience.
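To put rough numbers on that trade-off (my own back-of-the-envelope figures, not anything
taken from the oVirt or Gluster docs):

    replica 3                 -> ~33% usable capacity, survives 2 lost bricks
    disperse 6, redundancy 2  -> ~67% usable capacity, survives 2 lost bricks
    disperse 12, redundancy 2 -> ~83% usable capacity, survives 2 lost bricks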
It's perfectly legitimate not to support every potential permutation of oVirt and Gluster
deployment scenarios.
But the baked-in limitations, and perhaps even the motivation behind them, need to be
explained from the very start. It doesn't help oVirt's adoption and success if people only
find out after they have invested heavily, under the assumption that a "scale-out" solution
delivers what that term implies.