[ovirt-users] Best Practice Question: How many engines, one or more than one, for multiple physical locations

Simone Tiraboschi stirabos at redhat.com
Mon Dec 11 09:24:35 UTC 2017


On Fri, Dec 8, 2017 at 9:50 PM, Matt Simonsen <matt at khoza.com> wrote:

> Hello all,
>
> I read that with a hyper-converged Gluster setup the engine must reside on
> the same LAN as the nodes. I guess this makes sense by definition - i.e.,
> using Gluster storage and replicating Gluster bricks across the web sounds
> awful.
>
> This got me wondering about best practices for the engine setup. We have
> multiple physical locations (co-location data centers).
>
> In my initial plan I had expected to have my oVirt engine hosted
> separately from each physical location so that in the event of trouble at a
> remote facility the engine would still be usable.
>
> In this case, our prod sites would not have a "hyper-converged" setup if
> we decide to run GlusterFS for storage at any particular physical site, but
> I believe it would still be possible to use Gluster. oVirt would then have
> a 3-node cluster using GlusterFS storage, but not hyper-converged, since
> the engine would be in a separate facility.
>
> Is there any downside in this setup to having the engine off-site?
>

This is called a stretched cluster setup. It has pros and cons; for
instance, host fencing could become problematic.
VM leases could help:
https://ovirt.org/develop/release-management/features/storage/vm-leases/
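As a rough illustration, a lease can be attached to a VM through the REST
API or the Python SDK (ovirtsdk4). A minimal sketch follows; the engine
URL, credentials, VM name and storage domain ID are placeholders you would
replace with your own values:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine (placeholder URL and credentials).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='changeme',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]

# Put the VM lease on a storage domain visible to all hosts, so the VM
# can be safely restarted elsewhere if its host becomes unresponsive.
vms_service.vm_service(vm.id).update(
    types.Vm(
        lease=types.StorageDomainLease(
            storage_domain=types.StorageDomain(id='STORAGE_DOMAIN_UUID'),
        ),
    ),
)

connection.close()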



>
> Rather than having an off-site engine, should I consider one engine per
> physical co-location space?
>

This would be simpler, but you are going to lose a few capabilities that
can be relevant in a disaster recovery scenario.


>
> Thank you all for any feedback,
>
> Matt
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

