Hi,
another approach you may consider is to use the HA functionality provided by
oVirt, which re-creates the VM on another host if the original host becomes
unavailable and attaches the disk from the underlying Gluster storage (which is
already HA) to the newly created VM, so you don't have to care about building
HA storage inside your VMs. You also save some resources, as you don't have to
keep backup VMs running.
The drawback of this approach is that re-creating the VM takes more time than
switching traffic to an already running VM on a load balancer. Whether that is
acceptable depends on your constraints, but IMHO it is worth considering.
Vojta
On Friday, 16 April 2021 03:00:23 CEST David White via Users wrote:
I'm currently thinking about just setting up an rsync cron job to run every
minute.
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, April 15, 2021 8:55 PM, David White via Users <users(a)ovirt.org>
wrote:
> > David, I’m curious what the use case is
>
> This is for a customer who wants as much high availability as possible for
> their website, which relies on a basic LAMP or LNMP stack.
>
>
> The plan is to create a MariaDB Galera cluster.
>
>
> Each of the 3 VMs will run MariaDB, as well as Apache or Nginx (I haven't
> decided which, yet), and will be able to accept web traffic.
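A minimal sketch of the Galera settings each of the three nodes would carry; the cluster name, node names, and addresses here are assumptions, not details from the thread:

```ini
# /etc/mysql/conf.d/galera.cnf -- hypothetical node "web1" of 3
[mysqld]
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name       = "web-cluster"
# List all three VMs so a restarted node can rejoin via the others
wsrep_cluster_address    = "gcomm://10.0.0.11,10.0.0.12,10.0.0.13"
wsrep_node_name          = "web1"
wsrep_node_address       = "10.0.0.11"
```

The other two nodes would differ only in `wsrep_node_name` and `wsrep_node_address`; the first node is bootstrapped with `galera_new_cluster`.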
>
>
> So the website files will need to be the same across all 3 virtual
> servers.
>
>
> My original intent was to set up a mount point on all 3 virtual servers
> that mapped back to the same shared disk.
>
>
> Strahil, one idea I had, which I don't think would be ideal at all, was to
> set up a separate, new Gluster configuration on each of the 3 VMs.
> Gluster virtualized on top of gluster! If that doesn't make your head
> spin, what will? But I'm not seriously thinking about that. :)
>
>
> It did occur to me that I could set up a 4th VM to host the NFS share.
>
>
> But I'm trying to avoid as many single points of failure as possible.
>
>
> Sent with ProtonMail Secure Email.
>
>
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>
> On Thursday, April 15, 2021 7:40 PM, Strahil Nikolov via Users
> <users(a)ovirt.org> wrote:
> > I know that clustered applications (for example corosync/pacemaker
> > Active-Passive or even GFS2) require simultaneous access to the data.
> >
> >
> > In your case you can create:
> >
> > - 2 separate VMs replicating over DRBD and sharing the storage over
> >   NFS/iSCSI
> > - using NFS Ganesha (this is just a theory but should work) to export
> >   your Gluster volumes in a redundant and highly available way
> > Best Regards,
> > Strahil Nikolov
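For reference, Strahil's NFS-Ganesha idea would amount to an export stanza roughly like this; the export path, hostname, and volume name are assumptions for illustration:

```
EXPORT {
    Export_Id = 1;
    Path = "/webvol";
    Pseudo = "/webvol";
    Access_Type = RW;
    Squash = No_root_squash;

    # Ganesha talks to the Gluster volume directly via libgfapi,
    # so no FUSE mount of the volume is needed on the server
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";
        Volume = "webvol";
    }
}
```

With Ganesha running on more than one Gluster node, a floating IP (e.g. via ctdb or pacemaker) is what makes the NFS endpoint itself highly available.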
> >
> > On Friday, 16 April 2021 at 01:56:09 GMT+3, Jayme
> > <jaymef(a)gmail.com> wrote:
> > David, I’m curious what the use case is. Do you plan on using the disk
> > with three VMs at the same time? This isn’t really what shareable
> > disks are meant to do, AFAIK. If you want to share storage with multiple
> > VMs, I’d probably just set up an NFS share on one of the VMs.
> > On Thu, Apr 15, 2021 at 7:37 PM David White via Users
> > <users(a)ovirt.org> wrote:
> > > I found the proper documentation at
> > > https://www.ovirt.org/documentation/administration_guide/#Shareable_Disks.
> > > When I tried to edit the disk, I see that shareable is grayed out,
> > > and when I hover my mouse over it, I see "Sharable Storage is
> > > not supported on Gluster/Offload Domain". So to confirm, is there any
> > > circumstance where a Gluster volume can support shareable storage?
> > > Unfortunately, I don't have any other storage available, and I chose
> > > to use Gluster so that I could have an HA environment.
> > >
> > > Sent with ProtonMail Secure Email.
> > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> > >
> > > On Thursday, April 15, 2021 5:05 PM, David White via Users
> > > <users(a)ovirt.org> wrote:
> > >
> > > > I need to mount a partition across 3 different VMs.
> > > > How do I attach a disk to multiple VMs?
> > > > This looks like fairly old documentation-not-documentation:
> > > > https://www.ovirt.org/develop/release-management/features/storage/sharedrawdisk.html
> > > >
> > > > Sent with ProtonMail Secure Email.
> > >
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > > oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WZY6OJWBH5KAB5H2XXYJOVI7BLR4Z67F/