Why and when does oVirt make a compute host part of the HCI Gluster?

My understanding is that in an HCI environment the storage nodes should be rather static, while the pure compute nodes can be much more dynamic or opportunistic: those should, or at least could, even be switched off and restarted as part of oVirt's resource optimization. The 'pure compute' nodes fall into two major categories: a) those that can run the management engine, and b) those that can't.

What I don't quite understand is why both types seem to be made Gluster peers when they don't contribute any storage bricks: shouldn't they just be able to mount the Gluster volumes?

The reason for my concern is that I want to manage these compute hosts with much more freedom. I may have them join and take on workloads, or not, and I may want to shut them down. To my irritation I sometimes even see these 'missing' hosts being considered for quorum decisions, or being listed when I run 'gluster volume status engine': I find hosts there that definitely do not contribute bricks to 'engine' (or to any other volume).

I'm also not sure I get consistent behavior when I remove hosts from oVirt: I'd swear that quite a few remain Gluster peers even though they are completely invisible in the oVirt GUI (while 'hosted-engine --vm-status' will still list them).

So what's the theory here, and would it be a bug if removed hosts remain Gluster peers?
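To make the mismatch concrete, here is roughly the comparison I keep making. This is just a sketch using the plain Gluster CLI, with 'engine' standing in for whichever HCI volume you look at:

    # every host that is a member of the trusted storage pool
    gluster pool list

    # only the hosts that actually provide bricks for the 'engine' volume
    gluster volume info engine | grep -E '^Brick[0-9]+:'

    # anything in the first list but not in the second is a compute-only
    # node that has nevertheless been joined as a Gluster peer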

I think this is worth an RFE, so that you can select whether a node should become a Gluster peer or not. I'm not sure whether oVirt installs Gluster on a compute-only node, but I believe you can at least work around the problem by going over the shell and removing that node from the Gluster TSP (trusted storage pool).

Best Regards,
Strahil Nikolov
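A minimal sketch of that shell workaround, assuming the compute-only node is reachable as 'compute1.example.com' (a placeholder) and genuinely holds no bricks:

    # run on one of the storage nodes: confirm the peer is connected
    gluster peer status

    # remove the compute-only node from the trusted storage pool
    gluster peer detach compute1.example.com

    # as far as I recall, glusterd refuses the detach if the host still
    # hosts bricks, which acts as a safety net here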

Well, that's why I really want a theory of operation here, because removing a host as a Gluster peer might just break something in oVirt... and trying to fix that may not be trivial either. It's one of those cases where I'd really love to have nested virtualization work better, so I could snapshot a virtualized HCI cluster before doing these things and revert with a click or three.

What really irritates me is that even hosts which do not contribute bricks to a volume show up when I run a volume status. Mostly because I've had situations with the typical 3-node HCI plus an additional 5 compute-only nodes, and when I was updating these with some degree of overlap (several unloaded hosts at once), all of a sudden vmstore/data/engine would go down for lack of quorum... just because non-participating hosts were not available!

So again, if one of the oVirt developers could shine a light on this, it would be very much appreciated.
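For what it's worth, these are roughly the settings involved. As I understand it, server-side quorum is evaluated over glusterd peers in the trusted storage pool rather than over bricks, which would explain why unreachable compute-only peers can pull the volumes down:

    # per-volume view of the quorum-related options ('engine' as an example)
    gluster volume get engine cluster.server-quorum-type
    gluster volume get engine cluster.server-quorum-ratio

    # client-side quorum, by contrast, is counted per replica set of bricks
    gluster volume get engine cluster.quorum-type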

Another workaround that comes to my mind: disable the server quorum Gluster option. Of course, all of these are just workarounds and not a real fix.

Best Regards,
Strahil Nikolov
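A minimal sketch of that second workaround, assuming the stock oVirt HCI volume names 'engine', 'data' and 'vmstore'. The trade-off is that server quorum also protects the volumes while peers are partitioned, so disabling it removes that safeguard:

    # turn off server-side quorum enforcement per volume
    for vol in engine data vmstore; do
        gluster volume set "$vol" cluster.server-quorum-type none
    done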