Engine across Clusters

Has anyone set up hyperconverged Gluster (3 nodes) and then added more nodes afterwards while maintaining access to the engine? An oversight on my end was two-fold: the engine's Gluster volume is on the engine nodes, and the new nodes require their own cluster due to a different CPU type. So basically I am trying to see if I can set up a new cluster for the other nodes that require it while giving them the ability to run the engine, and of course, because they aren't part of the engine cluster, we all know how that goes. Has anyone dealt with this or worked around it? Any advice?

If you are trying to re-use the existing bricks (Gluster hosts) for both clusters, you could potentially do it by making a separate volume on each brick, then having each cluster use its own specific volume. Underneath, the volumes could have different physical media backing them, or, if you're willing to sacrifice space and visibility in the GUI, the same media. (You'll need to keep them on separate filesystem trees, though.)

As for the engine VM, there's no way for it to use a cluster other than the one it's assigned to, and VMs cannot be shared or migrated across clusters. (Again, that's because oVirt has no generic CPU type that would work regardless of the host's manufacturer.)

I've never tried this, but you could potentially fudge it by exporting the engine VM and re-importing it into the new cluster, setting up some external failover solution that would remap the IPs / hostname, and starting the VM on the alternate cluster via the REST API if the original one went down. oVirt can recover from an unexpected engine host shutdown (it has to for the hosted engine to work at all), but that support assumes that only one engine host is present per database. At the very least, the engine database would need to be on a separate host, possibly with its own mirroring and failover solution. And once the original engine VM was restored, that VM would need to recognize that the other engine already has control over the data center and not interfere with it. (I.e., the restored engine would need to terminate until the replacement was shut down. Both VMs can *never* be running at the same time.)

Of course, it goes without saying that this idea is completely unsupported, the second engine VM would need to be kept up to date with the original to avoid conflicts, and such a setup would be inherently unstable. (Only as stable as the chosen database mirroring and failover solution.)
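The "separate volume on each brick" idea above could be sketched roughly as follows. This is a hypothetical illustration: the hostnames (gluster1-3), brick paths, and volume names are placeholders, not anything from the thread, and the commands assume three peered Gluster hosts with the brick filesystems already mounted.

```shell
# Sketch: carving two oVirt storage volumes out of the same three Gluster
# hosts, one volume per oVirt cluster. All names below are placeholders.

# Keep the two volumes on separate filesystem trees on each host:
for h in gluster1 gluster2 gluster3; do
    ssh "$h" mkdir -p /gluster_bricks/clusterA/brick /gluster_bricks/clusterB/brick
done

# One replica-3 volume per cluster, backed by the same three hosts:
gluster volume create vol_clusterA replica 3 \
    gluster1:/gluster_bricks/clusterA/brick \
    gluster2:/gluster_bricks/clusterA/brick \
    gluster3:/gluster_bricks/clusterA/brick

gluster volume create vol_clusterB replica 3 \
    gluster1:/gluster_bricks/clusterB/brick \
    gluster2:/gluster_bricks/clusterB/brick \
    gluster3:/gluster_bricks/clusterB/brick

# Apply Gluster's virt option group (the settings oVirt expects) and start:
for v in vol_clusterA vol_clusterB; do
    gluster volume set "$v" group virt
    gluster volume start "$v"
done
```

Each oVirt cluster would then get its own storage domain pointing at its own volume, so the GUI never sees the two trees competing for the same space.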
But if you've got a test setup to toy around and get measurements with, it might be a fun project to spend a weekend on. :)

-Patrick Hibbs

On Fri, 2022-03-11 at 18:14 +0000, Abe E wrote:
Has anyone set up hyperconverged Gluster (3 nodes) and then added more nodes afterwards while maintaining access to the engine? An oversight on my end was two-fold: the engine's Gluster volume being on the engine nodes, and the new nodes requiring their own cluster due to a different CPU type.
So basically I am trying to see if I can set up a new cluster for the other nodes that require it while giving them the ability to run the engine, and of course, because they aren't part of the engine cluster, we all know how that goes. Has anyone dealt with this or worked around it? Any advice?

On Fri, 11 Mar 2022, Abe E wrote:
Has anyone set up hyperconverged Gluster (3 nodes) and then added more nodes afterwards while maintaining access to the engine?
I have added additional self-hosted-engine hosts to a cluster in 4.3, and it worked fine. I don't know whether 4.4 is stricter about that, as I moved away from the self-hosted engine for 4.4.
An oversight on my end was two-fold: the engine's Gluster volume being on the engine nodes, and the new nodes requiring their own cluster due to a different CPU type.
You don't _have_ to use the default CPU type. If the new hosts are just newer models from the same vendor, you can add them to the existing cluster with the old CPU type. I have a mix of Skylake and Cascade Lake CPUs in my Skylake cluster, and it works great. You could have issues if your original cluster is too old to offer the "Secure" CPU types and your new CPUs only support the "Secure" variants, or if your old cluster is Intel and the new one is AMD.
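One way to check whether the mix described above would work is to compare what each host reports against the cluster's configured CPU type through the oVirt REST API. A rough sketch, assuming admin credentials; the engine hostname and password are placeholders, and `-k` skips TLS verification for brevity (use `--cacert` against a real engine):

```shell
# Sketch: inspect host CPUs vs. the cluster CPU type via the REST API.
# engine.example.com and the credentials are placeholders.
API="https://engine.example.com/ovirt-engine/api"
AUTH="admin@internal:password"

# Each host's reported CPU (the cluster type must be supported by all of them):
curl -s -k -u "$AUTH" "$API/hosts" | grep -A3 '<cpu'

# The cluster's currently configured CPU type:
curl -s -k -u "$AUTH" "$API/clusters" | grep -A3 '<cpu'
```

If every host supports the cluster's type (or a newer model in the same family), they can coexist in one cluster at the older type.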
So basically I am trying to see if I can set up a new cluster for the other nodes that require it while giving them the ability to run the engine, and of course, because they aren't part of the engine cluster, we all know how that goes. Has anyone dealt with this or worked around it? Any advice?
From a Gluster standpoint, this should work fine, but I suspect oVirt won't like having self-hosted-engine hosts in different clusters. Unless you plan on entirely removing the original hosts in the future, I'm not sure this should be much of an issue anyway. While the idea of being able to run the engine on any host is nice, it's also a bit overkill; three hosts should be sufficient for redundancy.

If you just want the engine running on the newer hosts because they may be around longer, I would make the new hosts part of the existing cluster (assuming their CPUs are close enough), migrate everything to the new hosts, then remove the old hosts, rebuild them as non-self-hosted hosts, and put them in a separate cluster. Once that's done, you can upgrade the cluster CPU type on the new machines.
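The final step of that workflow, raising the cluster CPU type once only the newer hosts remain, can be done through the REST API as well. A minimal sketch; the engine URL, credentials, cluster ID, and the target CPU type string are all placeholders to adapt:

```shell
# Sketch: bump the cluster CPU type after all old hosts are removed.
# All values below are placeholders for your environment.
API="https://engine.example.com/ovirt-engine/api"
AUTH="admin@internal:password"
CLUSTER_ID="00000000-0000-0000-0000-000000000000"

curl -s -k -u "$AUTH" -X PUT \
    -H "Content-Type: application/xml" \
    -d '<cluster><cpu><type>Intel Cascadelake Server Family</type></cpu></cluster>' \
    "$API/clusters/$CLUSTER_ID"
```

Note that already-running VMs keep the old CPU model until they are restarted, so plan a maintenance window for the engine VM itself.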
participants (3)
- Abe E
- Patrick Hibbs
- Sketch