Gluster in the VMs... I was thinking to propose it, but I wasn't
sure what kind of workload you got.
Maybe that's going to be my best option.
Thank you!
I remember supporting that exact type of setup (Gluster in AWS VMs syncing website files) for
a hosting company I interned with over 10 years ago.
I've forgotten a lot about gluster since then, but I suppose I need to learn it
anyway, since I've taken the plunge into oVirt land. :)
I'll think about it, and look into it further.
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, April 16, 2021 9:47 AM, Strahil Nikolov via Users <users(a)ovirt.org>
wrote:
Gluster in the VMs... I was thinking to propose it, but I wasn't
sure what kind of workload you got.
I think that with Redis and Gluster on VMs you will be quite fine.
As for Galera - it doesn't need shared storage at all, so you are covered there too.
Don't forget that latency kills Gluster, so keep the network between the VMs as tight as
possible, but at the same time keep them on separate hosts.
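Strahil's suggestion of running Gluster inside the VMs could be sketched roughly like this. The hostnames web1/web2/web3, the volume name webroot, and the brick paths are all placeholders for illustration, not anything from the thread:

```shell
# On each of the 3 VMs: install glusterfs-server, start glusterd,
# and open the Gluster ports between them. Then, from web1 only:
gluster peer probe web2
gluster peer probe web3

# Replica-3 volume for the shared website files (brick paths are illustrative).
gluster volume create webroot replica 3 \
    web1:/bricks/webroot web2:/bricks/webroot web3:/bricks/webroot
gluster volume start webroot

# Each VM then mounts the volume locally over FUSE:
mount -t glusterfs localhost:/webroot /var/www/html
```

With replica 3 every node holds a full copy, so any single VM can fail without taking the web root offline.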
Best Regards,
Strahil Nikolov
On Friday, 16 April 2021 at 03:57:51 GMT+3, David White via
Users users(a)ovirt.org wrote:
> David, I’m curious what the use case is
This is for a customer who wants as much high availability as
possible for their website, which relies on a basic LAMP or LNMP stack.
The plan is to create a MariaDB Galera cluster.
Each of the 3 VMs will run MariaDB, as well as Apache or Nginx (I
haven't decided which, yet), and will be able to accept web traffic.
So the website files will need to be the same across all 3 virtual
servers.
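A 3-node MariaDB Galera cluster along those lines needs no shared storage, just the wsrep settings pointing each node at its peers. A minimal sketch for one node, assuming hypothetical hostnames web1/web2/web3 and a MariaDB 10.4+ galera-4 provider path:

```ini
# Illustrative /etc/my.cnf.d/galera.cnf for node web1; names and IPs are made up.
[galera]
wsrep_on                 = ON
wsrep_provider           = /usr/lib64/galera-4/libgalera_smm.so
wsrep_cluster_name       = "web-cluster"
wsrep_cluster_address    = "gcomm://web1,web2,web3"
wsrep_node_name          = "web1"
wsrep_node_address       = "10.0.0.11"
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
```

The other two nodes get the same file with their own wsrep_node_name/wsrep_node_address; the first node is bootstrapped with galera_new_cluster and the rest join via the gcomm:// list.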
My original intent was to set up a mount point on all 3 virtual
servers that mapped back to the same shared disk.
Strahil, one idea I had, which I don't think would be ideal at all,
was to set up a separate, new Gluster configuration on each of the 3 VMs. Gluster
virtualized on top of Gluster! If that doesn't make your head spin, what will? But
I'm not seriously considering that. :)
It did occur to me that I could set up a 4th VM to host the NFS
share, but I'm trying to stay away from as many single points of
failure as possible.
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, April 15, 2021 7:40 PM, Strahil Nikolov via Users users(a)ovirt.org wrote:
> I know that clustered applications (for example
corosync/pacemaker Active-Passive, or even GFS2) require simultaneous access to the data.
> In your case you can create:
> - 2 separate VMs replicating over DRBD and sharing the storage
over NFS/iSCSI
> - Using NFS Ganesha (this is just a theory, but it should work) to export your Gluster
volumes in a redundant and highly available way
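Both of Strahil's options boil down to short config files. The sketches below are illustrative only; the resource name, device paths, hostnames, IPs, and volume name are all assumptions, not details from the thread:

```
# Option 1 - illustrative /etc/drbd.d/nfsdata.res for the 2-VM DRBD pair:
resource nfsdata {
  device    /dev/drbd0;
  disk      /dev/vdb;
  meta-disk internal;
  on nfs1 {
    address 10.0.0.21:7789;
  }
  on nfs2 {
    address 10.0.0.22:7789;
  }
}
```

```
# Option 2 - illustrative /etc/ganesha/ganesha.conf export of a Gluster
# volume through FSAL_GLUSTER (volume name "webroot" is assumed):
EXPORT {
    Export_Id = 1;
    Path = "/webroot";
    Pseudo = "/webroot";
    Access_Type = RW;
    Squash = No_root_squash;
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";
        Volume = "webroot";
    }
}
```

In the DRBD case, something like pacemaker would still be needed to fail the NFS/iSCSI service over; NFS-Ganesha on each Gluster node avoids that extra layer.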
> Best Regards,
> Strahil Nikolov
> On Friday, 16 April 2021 at 01:56:09 GMT+3, Jayme
jaymef(a)gmail.com wrote:
>
> > David, I’m curious what the use case is. Do you plan on using the disk with
three VMs at the same time? This isn’t really what shareable disks are meant to do, AFAIK.
If you want to share storage with multiple VMs, I’d probably just set up an NFS share on one
of the VMs
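Jayme's simpler suggestion might look something like this; the server hostname nfs1 and the subnet are placeholders:

```shell
# On the VM acting as NFS server (hypothetical 10.0.0.0/24 subnet):
echo '/var/www/html 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# On each of the other web VMs:
mount -t nfs nfs1:/var/www/html /var/www/html
```

The trade-off David raises below still applies: that one VM becomes a single point of failure for the whole site.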
> > On Thu, Apr 15, 2021 at 7:37 PM David White via Users users(a)ovirt.org wrote:
> > > I found the proper documentation
at https://www.ovirt.org/documentation/administration_guide/#Shareable_Disks.
> > > When I tried to edit the disk, I see that sharable is grayed out, and when
I hover my mouse over it, I see "Sharable Storage is not supported on Gluster/Offload
Domain".
> > > So to confirm: is there any circumstance where a Gluster volume can support
shareable storage? Unfortunately, I don't have any other storage available, and I chose
to use Gluster so that I could have an HA environment.
> > > Sent with ProtonMail Secure Email.
> > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> > > On Thursday, April 15, 2021 5:05 PM, David White via Users
users(a)ovirt.org wrote:
> > > > I need to mount a partition across 3 different VMs.
> > > > How do I attach a disk to multiple VMs?
> > > > This looks like fairly old
documentation-not-documentation: https://www.ovirt.org/develop/release-ma...
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement:
https://www.ovirt.org/privacy-policy.html
> > > oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WZY6OJWBH5K...