If you are trying to re-use the existing bricks (gluster hosts) for
both clusters, you could potentially do it by creating a separate
volume on each brick, then having each cluster use its own specific
volume.
Underneath, the volumes could have different physical media backing
them, or, if you're willing to sacrifice space and visibility in the
GUI, the same media. (You'll need to keep them on separate filesystem
trees though.)
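Something like this (untested; host names, volume names, and brick
paths are just placeholders) would carve out one replica-3 volume per
cluster on the same three hosts, each on its own filesystem tree:

    # One volume per cluster, same hosts, separate brick trees
    gluster volume create vol_clusterA replica 3 \
        node1:/gluster_bricks/clusterA/brick \
        node2:/gluster_bricks/clusterA/brick \
        node3:/gluster_bricks/clusterA/brick
    gluster volume create vol_clusterB replica 3 \
        node1:/gluster_bricks/clusterB/brick \
        node2:/gluster_bricks/clusterB/brick \
        node3:/gluster_bricks/clusterB/brick
    gluster volume start vol_clusterA
    gluster volume start vol_clusterB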
As for the engine VM, there's no way for it to run on a cluster other
than the one it's assigned to, and VMs cannot be shared / migrated
across clusters. (Again, that's due to oVirt not having a generic CPU
type that would work regardless of the host's manufacturer.)
I've never tried this, but you could potentially fudge it by
exporting the engine VM and re-importing it into the new cluster,
setting up some external failover solution that would remap the
IPs / hostname, and starting up the VM on the alternate cluster via
the REST API (rough sketch below) if the original one went down.
oVirt can recover from an unexpected engine host shutdown (it has to
for the hosted engine to work at all), but that support assumes that
only one engine host is present per database. At the very least, the
engine database would need to be on a separate host, possibly with
its own mirroring and failover solution, and once the original engine
VM was restored, said VM would need to recognize that the other
engine already has control over the data center and not interfere
with it. (I.e., the restored engine would need to terminate until the
replacement was shut down. Both VMs can *never* be running at the
same time.)
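For the REST API part, starting a VM is just a POST to its start
action. Everything here (engine hostname, credentials, VM id) is a
placeholder, and this assumes the surviving engine's API is actually
reachable at that point:

    # Kick off the replacement engine VM via the oVirt REST API
    curl -k --user 'admin@internal:secret' \
        -X POST \
        -H 'Content-Type: application/xml' \
        -H 'Accept: application/xml' \
        -d '<action/>' \
        'https://engine.example.com/ovirt-engine/api/vms/VM_ID/start'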
Of course, it goes without saying that this idea is completely
unsupported, that the second engine VM would need to be kept
up-to-date with the original to avoid conflicts, and that such a
setup would be inherently unstable. (Only as stable as the chosen
database mirroring and failover solution.) But if you've got a test
setup to toy around and take measurements with, it might be a fun
project to spend a weekend on. :)
-Patrick Hibbs
On Fri, 2022-03-11 at 18:14 +0000, Abe E wrote:
Has anyone set up hyperconverged gluster (3 nodes) and then added
more afterwards while maintaining access to the engine?
An oversight on my end was twofold: the engine gluster being on the
engine nodes, and the new nodes requiring their own cluster due to a
different CPU type.
So basically I am trying to see if I can set up a new cluster for my
other nodes that require it, while trying to give them the ability to
run the engine, and of course, because they aren't part of the engine
cluster, we all know how that goes. Has anyone dealt with this or
worked around it? Any advice?