Thank you very much.
I am planning to run a 3-node GlusterFS cluster plus 3 compute-only nodes that will be permanently running.
Best regards.
thomas@hoberg.net wrote:
I have done that, and even added five nodes that contribute a separate Gluster file system using dispersed mode (erasure coding, which is more space-efficient).
But in another cluster with such a 3-node HCI base, I had several (3 or 4) compute nodes that were dual-boot or simply shut off when not in use; I even used the GUI to shut them down properly.
This caused strange issues when I shut down all three compute-only nodes: Gluster reported loss of quorum, and essentially the entire HCI lost storage, even though these compute nodes didn't contribute any bricks to the Gluster volumes at all. In fact the compute nodes probably shouldn't have participated in the Gluster pool in the first place, since they were only clients, but the Cockpit wizard added them as peers anyway. The likely mechanism: Gluster's server quorum counts all peers in the trusted storage pool, not just brick hosts, so with three of six peers down the pool sits at exactly 50% and falls below the default "more than half" threshold.
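If you hit this, the peer membership and quorum settings can be inspected from any of the HCI nodes. A minimal sketch; the volume name "engine" is an assumption (the Cockpit wizard typically creates engine/data/vmstore volumes on a standard HCI deployment):

    # list every peer in the trusted storage pool, including brick-less ones
    gluster pool list
    # check whether server-side quorum is enforced for a volume
    gluster volume get engine cluster.server-quorum-type
    # check the cluster-wide quorum ratio (unset means more than 50% of peers must be up)
    gluster volume get all cluster.server-quorum-ratio

If the compute-only hosts show up in the pool list, they count toward server quorum even though they serve no bricks.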
I believe this is because HCI is designed to support adding extra nodes in sets of three, e.g. for a 9-node setup, which should work really nicely with 7+2 disperse encoding.
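For the arithmetic: a 7+2 dispersed volume stores 7 data fragments plus 2 redundancy fragments per stripe, so about 7/9 ≈ 78% of raw capacity is usable (versus 33% for replica 3) while still tolerating two failed bricks. A hypothetical creation command, with placeholder hostnames and brick paths:

    # 9 bricks total: 7 data + 2 redundancy fragments per stripe
    gluster volume create dispersed-vol disperse 9 redundancy 2 \
        node{1..9}:/gluster/brick1/dispersed-vol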
I didn't dare to reproduce the situation intentionally, but if you come across it, perhaps you can document and report it. If (most of) the extra nodes are permanently running, you don't need to worry.
In terms of regaining control, you mostly just have to turn the missing nodes back on; oVirt can be astonishingly resilient. If you instead remove the nodes from the pool before shutting them down (see the sketch below), the quorum issue goes away.
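A minimal sketch of that removal, assuming a hypothetical compute-only peer named node4 that holds no bricks (Gluster refuses to detach a peer that still hosts bricks); on an oVirt setup you would also put the host into maintenance and remove it through the engine first:

    # confirm the peer holds no bricks before detaching it
    gluster volume info | grep node4
    # remove the compute-only node from the trusted storage pool
    gluster peer detach node4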