My understanding is that in an HCI environment the storage nodes should be rather static,
but that the pure compute nodes can be much more dynamic or opportunistic: those could,
or even should, be switched off and restarted as part of oVirt's resource optimization.
The 'pure compute' nodes fall into two major categories: a) those that can run the
management engine, and b) those that can't.
What I don't quite understand is why both types seem to be made Gluster peers when they
don't contribute any storage bricks: shouldn't they just be able to mount the Gluster
volumes as clients?
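For reference, a Gluster volume can be mounted by a pure client that is not part of the
trusted pool at all; something along these lines (hostnames and mount point are just
placeholders) should be all a compute-only host needs:

    # client-only FUSE mount, no 'gluster peer probe' required
    mount -t glusterfs -o backup-volfile-servers=storage2.example.com:storage3.example.com \
        storage1.example.com:/engine /mnt/engine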
The reason for my concern is that I want to manage these compute hosts with much more
freedom: I may have them join and take on workloads, or not, and I may want to shut them
down entirely.
To my irritation I sometimes even see these 'missing' hosts being considered for quorum
decisions, or listed when I run 'gluster volume status engine'. I find hosts there that
definitely do not contribute bricks to 'engine' (or any other volume).
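For what it's worth, this is roughly how I compare the trusted pool with the hosts that
actually provide bricks (a sketch; the volume name and grep pattern may need adjusting):

    # hosts in the trusted pool
    gluster pool list
    # hosts that actually contribute bricks to the 'engine' volume
    gluster volume info engine | grep -E '^Brick[0-9]+:'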
Then I'm not even sure the behavior is consistent when I remove hosts from oVirt:
I'd swear that quite a few remain Gluster peers even though they are completely
invisible in the oVirt GUI (while 'hosted-engine --vm-status' will still list them).
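If that is just leftover state, I assume the manual cleanup would be something along
these lines (hostname is a placeholder; as far as I know a peer can only be detached
once it holds no bricks):

    # check for stale peers on one of the storage nodes
    gluster peer status
    # remove a host that no longer exists in oVirt and holds no bricks
    gluster peer detach compute5.example.com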
So what's the theory here, and would it be a bug if removed hosts remain Gluster peers?