I use three very low-power target nodes (J5005 Intel Atoms, 24GB RAM, 250GB SSD) to
experiment with hyperconverged oVirt, and some bigger machines to act as pure compute
nodes, preferably CentOS based, potentially based on the oVirt Node image.
I've used both CentOS 7 and the oVirt Node image as a base (all latest versions and
patches). On CentOS I typically use a single partition with an xfs root file system, to
avoid carving up the constrained space; with the oVirt Node images I use the recommended
partition layout, which likewise consumes the whole primary SSD, /dev/sda.
In either case there is no spare device or partition left for the three-node
hyperconverged wizard's Ansible scripts to work with.
And since I want to understand better what is going on under the hood anyway, I tried to
separate the Gluster setup from the hosted-engine setup.
Question: is the impact of that decision perhaps far bigger than I imagined?
So I went with the alternative, the plain Hosted-Engine wizard, where the storage is
already provisioned: I had set up a 2+1 (replica 2 plus arbiter) Gluster volume with
bricks placed directly on the xfs root, assuming I would end up with something very
similar to what the three-node wizard automates, just without the LVM/VDO layer that its
Ansible scripts put in place.
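For reference, the volume creation looked roughly like this; hostnames and brick paths
are placeholders, and Gluster wants 'force' when bricks sit on the root file system:

    # from node1: probe the peers, then build the arbitrated replica volume
    gluster peer probe node2.example.com
    gluster peer probe node3.example.com
    # 2+1 = two data bricks plus one arbiter; bricks live on the xfs root,
    # hence the trailing 'force'
    gluster volume create engine replica 3 arbiter 1 \
        node1.example.com:/gluster/engine/brick \
        node2.example.com:/gluster/engine/brick \
        node3.example.com:/gluster/engine/brick \
        force
    gluster volume set engine group virt    # apply the virt tuning profile
    gluster volume start engine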
But I am running into all kinds of problems, and what continues to irritate me is the
note in the manuals that clusters need to be either 'compute' or 'storage' but cannot be
both--yet that is the very point of hyperconvergence...
So basically I am wondering whether at this point there is a strict bifurcation in oVirt
between the hyperconverged variant and 'Hosted-Engine on existing storage', without any
middle ground. Or is a two-phase approach OK, where you separate out the Gluster part
and the Hosted-Engine part *but with physical servers that are doing both storage and
compute*?
Also, I am wondering about the 'Enable Virt Service' and 'Enable Gluster Service' tick
boxes on the Cluster property pages: in the cluster generated by the 'hosted-engine'
wizard, 'Gluster Service' was unticked, even though the cluster runs on Gluster storage.
Ticking 'Gluster Service' in addition had no visible effect, but unticking either of the
two is now impossible. I could not find any documentation on the significance or effect
of these boxes.
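For what it's worth, the two tick boxes seem to correspond to the virt_service and
gluster_service flags on the Cluster object in the REST API, which is the only other
place I found to inspect them (engine FQDN and credentials below are placeholders):

    # dump the service flags of all clusters via the REST API
    curl -s -k -u 'admin@internal:password' \
        -H 'Accept: application/xml' \
        'https://engine.example.com/ovirt-engine/api/clusters' \
        | grep -E '<(virt|gluster)_service>'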
The prepared three-node replicated Gluster storage for the /engine volume was running on
CentOS nodes, and the Hosted-Engine VM was then created on an additional oVirt Node host
using that Gluster storage (trying to do the same on CentOS nodes always fails; separate
topic), under the assumption that I could later push the hosted-engine VM back to the
Gluster storage nodes and get rid of the temporary compute-only host used for the
hosted-engine setup.
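Concretely, the deployment on that temporary host went along these lines (names are
again placeholders):

    # on the temporary compute-only oVirt Node host
    hosted-engine --deploy
    # at the storage prompts I pointed it at the prepared volume:
    #   storage type:       glusterfs
    #   storage connection: node1.example.com:/engine
    #   mount options:      backup-volfile-servers=node2.example.com:node3.example.com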
I then tried to add the Gluster nodes as hosts (and as additional hosted-engine nodes);
that also failed. I would like some clarity on whether it failed because they are CentOS
based or because they were already being used for storage (and there really is this
hidden exclusivity).
So, to repeat my main questions once again:
Is the (one- or three-node) hyperconverged oVirt setup a completely different thing from
hosted-engine on Gluster?
Are you supposed to be able to add extra compute hosts, storage, etc. to be managed by a
three-node HC based hosted-engine?
Are you supposed to be able to expand the three-node HC setup in terms of storage as
well, even if currently perhaps only in quorum-maintaining multiples (see the add-brick
sketch at the very end)?
Does oVirt currently actually mandate strict 100% segregation between compute and
storage unless you use the wholly self-contained single/triple-node HC setups?
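(For the storage-expansion question, what I have in mind is something like the
following, i.e. adding bricks in multiples of the replica-set size so that quorum is
preserved; names are again placeholders:)

    # grow the volume by one more 2+1 replica set, three bricks at a time
    gluster volume add-brick engine \
        node4.example.com:/gluster/engine/brick \
        node5.example.com:/gluster/engine/brick \
        node6.example.com:/gluster/engine/brick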