How do I share a disk across multiple VMs?

I need to mount a partition across 3 different VMs. How do I attach a disk to multiple VMs?

This looks like fairly old documentation-not-documentation: https://www.ovirt.org/develop/release-management/features/storage/sharedrawd...

I found the proper documentation at https://www.ovirt.org/documentation/administration_guide/#Shareable_Disks.

When I try to edit the disk, I see that "Shareable" is grayed out, and when I hover my mouse over it, I see "Sharable Storage is not supported on Gluster/Offload Domain".

So to confirm: is there any circumstance where a Gluster volume can support shareable storage? Unfortunately, I don't have any other storage available, and I chose to use Gluster so that I could have an HA environment.
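
For reference, this is roughly what attaching one shareable disk to several VMs looks like when scripted against the engine API with the Python SDK, ovirt-engine-sdk-python. It is only a sketch: the engine URL, credentials, VM names and the "data1" storage domain are placeholders, and, as the tooltip above says, it needs a storage domain type that allows shareable disks, such as NFS or iSCSI, not a Gluster domain.

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='password',
    insecure=True,
)

# Create a floating, raw, shareable disk (shareable disks must be raw).
disks_service = connection.system_service().disks_service()
disk = disks_service.add(
    types.Disk(
        name='shared-data',
        format=types.DiskFormat.RAW,
        shareable=True,
        provisioned_size=100 * 2**30,
        storage_domains=[types.StorageDomain(name='data1')],
    )
)

# Wait until the new disk is unlocked before attaching it.
disk_service = disks_service.disk_service(disk.id)
while disk_service.get().status != types.DiskStatus.OK:
    time.sleep(2)

# Attach the same disk to each VM.
vms_service = connection.system_service().vms_service()
for vm_name in ('web1', 'web2', 'web3'):
    vm = vms_service.list(search='name=%s' % vm_name)[0]
    attachments = vms_service.vm_service(vm.id).disk_attachments_service()
    attachments.add(
        types.DiskAttachment(
            disk=types.Disk(id=disk.id),
            interface=types.DiskInterface.VIRTIO_SCSI,
            bootable=False,
            active=True,
        )
    )

connection.close()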

David, I'm curious what the use case is. Do you plan on using the disk with three VMs at the same time? This isn't really what shareable disks are meant to do, AFAIK. If you want to share storage with multiple VMs, I'd probably just set up an NFS share on one of the VMs.

I know that clustering applications (for example corosync/pacemaker active-passive, or even GFS2) require simultaneous access to the data. In your case you can:

- Create 2 separate VMs replicating over DRBD and sharing the storage over NFS/iSCSI
- Use NFS Ganesha (this is just a theory, but it should work) to export your Gluster volumes in a redundant and highly available way

Best Regards,
Strahil Nikolov
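
For what it's worth, the NFS-Ganesha idea would roughly boil down to an export block like the one below on the Ganesha node. This is only a sketch: the volume name "webroot" is a placeholder, and the redundant/highly-available part of Ganesha still has to be set up separately.

# /etc/ganesha/ganesha.conf (fragment)
EXPORT {
    Export_Id = 1;
    Path = "/webroot";
    Pseudo = "/webroot";
    Access_Type = RW;
    Squash = No_root_squash;
    Protocols = "4";
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";
        Volume = "webroot";
    }
}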

David, I'm curious what the use case is

This is for a customer who wants as much high availability as possible for their website, which relies on a basic LAMP or LNMP stack.

The plan is to create a MariaDB Galera cluster. Each of the 3 VMs will run MariaDB, as well as Apache or Nginx (I haven't decided which yet), and will be able to accept web traffic. So the website files will need to be the same across all 3 virtual servers. My original intent was to set up a mount point on all 3 virtual servers that mapped back to the same shared disk.

Strahil, one idea I had, which I don't think would be ideal at all, was to set up a separate, new Gluster configuration on each of the 3 VMs. Gluster virtualized on top of Gluster! If that doesn't make your head spin, what will? But I'm not seriously thinking about that. :)

It did occur to me that I could set up a 4th VM to host the NFS share, but I'm trying to stay away from as many single points of failure as possible.
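
Side note on the Galera part of the plan: Galera replicates at the database level, so each node keeps its own local datadir and no shared disk is needed for MariaDB itself. A minimal per-node sketch follows; the cluster/node names, addresses and the provider path are placeholders that depend on the installed galera package.

# /etc/my.cnf.d/galera.cnf (sketch for node web1)
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera-4/libgalera_smm.so
wsrep_cluster_name=web-cluster
wsrep_cluster_address=gcomm://web1,web2,web3
wsrep_node_name=web1
wsrep_node_address=10.0.0.11
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

# first node only, to bootstrap the cluster; the others just start mariadb:
galera_new_cluster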

I'm currently thinking about just setting up an rsync cron job to run every minute.
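
A rough sketch of what such a cron entry could look like, assuming web1 is the only node where the content actually changes (rsync is one-way); hostnames and paths are placeholders, and flock keeps runs from overlapping if a sync takes longer than a minute:

# crontab on web1
* * * * * flock -n /run/websync.lock sh -c 'rsync -az --delete /var/www/ web2:/var/www/; rsync -az --delete /var/www/ web3:/var/www/'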

Hi,
another approach you may consider is to use the HA functionality provided by oVirt, which re-creates the VM on another host if the original host becomes unavailable and attaches the disk from the underlying Gluster (which is already HA) to the newly created VM, so you don't have to care about creating HA storage on your VMs. You also save some resources, as you don't have to have backup VMs running.

The drawback of this approach is that re-creating the VM takes more time than switching the traffic to an already running VM on a load balancer. This depends on your constraints, but IMHO it's worth considering.

Vojta
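
If you prefer to script the HA flag instead of ticking the "Highly Available" checkbox in the Admin Portal, here is a rough sketch with the Python SDK; the URL, credentials and VM name are placeholders.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='password',
    insecure=True,
)

# Mark the VM as highly available so the engine restarts it elsewhere
# if its host fails.
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=web1')[0]
vms_service.vm_service(vm.id).update(
    types.Vm(high_availability=types.HighAvailability(enabled=True, priority=1))
)

connection.close()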

On Friday, April 16, 2021 4:40 AM, Vojtech Juranek <vjuranek@redhat.com> wrote:

Hi, another approach you may consider is to use the HA functionality provided by oVirt, which re-creates the VM on another host if the original host becomes unavailable and attaches the disk from the underlying Gluster (which is already HA) to the newly created VM, so you don't have to care about creating HA storage on your VMs. You also save some resources as you don't have to have backup VMs running.

This thought did occur to me as well. However, the customer is pretty adamant that they would like to reduce downtime as much as possible, and I knew that the HA feature built into oVirt meant there could be some downtime while a VM gets killed and moved from one host to another.

Don't get me wrong - I'll still mark all 3 of the VMs as HA inside of oVirt. But we're using Cloudflare for the load balancing, and we're going to push two of the VMs through one of our datacenter uplinks and the 3rd VM through the 2nd uplink, so that our routing will be HA for this customer as well.

At this point, I'm strongly leaning towards just a simple rsync via cron that will run every minute (or perhaps every 5 minutes) or something. But yeah, I'm pretty disappointed that the shared disk approach isn't supported with the current Gluster implementation. I'm sure that there are technical issues involved with that decision, but I'm wondering if that could be a reasonable feature request for future versions of oVirt?

Gluster in the VMs... I was thinking to propose it, but I wasn't sure what kind of workload you've got.

I think that with Redis and Gluster on the VMs you will be quite fine. As for Galera, it doesn't need shared storage at all, so you will be quite fine there too.

Don't forget that latency kills Gluster, so keep it as tight as possible, but at the same time keep the nodes on separate hosts.

Best Regards,
Strahil Nikolov
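
A rough sketch of what a small replica-3 Gluster volume inside the three web VMs could look like; hostnames, the brick device/paths and the mount point are placeholders, and each VM would need its own small brick disk:

# on each of the three VMs: give Gluster its own brick filesystem
mkfs.xfs /dev/vdb
mkdir -p /bricks/webroot
mount /dev/vdb /bricks/webroot

# from one node:
gluster peer probe web2
gluster peer probe web3
gluster volume create webroot replica 3 web1:/bricks/webroot/brick web2:/bricks/webroot/brick web3:/bricks/webroot/brick
gluster volume start webroot

# on each node, mount the volume where the web server expects the files:
mount -t glusterfs localhost:/webroot /var/www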

Gluster in the VMs... I was thinking to propose it, but I wasn't sure what kind of workload you got.

Maybe that's going to be my best option. Thank you!

I remember supporting that exact type of setup (Gluster in AWS VMs syncing website files) for a hosting company I interned with over 10 years ago. I've forgotten a lot about Gluster since then, but I suppose I need to learn it anyway, since I've taken the plunge into oVirt land. :) I'll think about it and look into it further.

Sharing disks typically requires that you coordinate their use above the disk level. So did you consider sharing a file system instead?

Members of my team have been using NetApp for their entire career and are quite used to sharing files, even for databases.

And since Gluster HCI basically builds disks out of a replicated file system, why not use that directly? All they do these days is mount some parts of oVirt's 'data' volume inside the VMs as a GlusterFS. We just create a separate directory to avoid stepping on oVirt's toes and mount that on the clients, who won't see or disturb the oVirt images.

They also run persistent Docker storage on these, with Gluster mounted by the daemon, so none of the Gluster stuff needs to be baked into the Docker images. That gives you HA, zero extra copying and very fast live migrations, which move RAM content only.

I actually added separate Glusters (not managed by oVirt) using erasure-coded dispersed volumes for things that aren't databases, because the storage efficiency is much better and a lot of that data is read-mostly. These are machines that are seen as pure compute hosts by oVirt, but offer distinct Gluster volumes to all types of consumers via GlusterFS (NFS or SMB would work, too).

Too bad oVirt breaks with dispersed volumes and Gluster won't support a seamless migration from 2+1 replicas+arbiter to, say, 7:2 dispersed volumes as you add triplets of hosts...

If only oVirt was a product rather than only a patchwork design!
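
A rough sketch of that in-VM mount, assuming the HCI volume is called "data" and the hosts are host1/host2/host3; all names and the shared/ subdirectory are placeholders, and subdirectory mounts need a reasonably recent GlusterFS client and may need an auth.allow entry for the subdirectory on the server side:

# create the extra directory once, via any existing mount of the volume:
mount -t glusterfs host1:/data /mnt/tmp
mkdir /mnt/tmp/shared
umount /mnt/tmp

# inside each VM, mount only that subdirectory:
mount -t glusterfs -o backup-volfile-servers=host2:host3 host1:/data/shared /srv/shared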

On Tue, Apr 20, 2021 at 3:00 AM Thomas Hoberg <thomas@hoberg.net> wrote:
Too bad oVirt breaks with dispersed volumes and Gluster won't support a seamless migration from 2+1 replicas+arbiter to, say, 7:2 dispersed volumes as you add triplets of hosts...
+Gobinda Das <godas@redhat.com> you may be interested in joining this discussion.
If only oVirt was a product rather than only a patchwork design!
You're welcome to help with oVirt project design and discuss with the community the parts that you think should benefit from a re-design.
-- Sandro Bonazzola, Manager, Software Engineering, EMEA R&D RHV, Red Hat EMEA, sbonazzo@redhat.com

You're welcome to help with oVirt project design and discuss with the community the parts that you think should benefit from a re-design.
I consider these pesky little comments part of the discussion, even if I know they are not the best style.

But how much is there to discuss, if Red Hat has already decided to switch to a beta base (CentOS Stream) underneath oVirt? Nobody wants bleeding edge on a hypervisor, except those who develop that hypervisor.

oVirt is supposed to deliver higher reliability than bare metal hardware, by providing a fault-tolerant design and automatic fault recovery. But if the software stack that the HA engine builds on is more volatile than the hardware below, it simply can't do its work of increasing overall resilience: a beta OS kills the value of a HA management stack above. Only a RHEL downstream CentOS is near solid enough to build on (unless you did fully validated oVirt node images). Bleeding edge is what you put inside the VMs, not underneath. I don't think I have heard a single oVirt *user* advocating the switch to Stream. IMHO it's political and kills oVirt's value proposition.

My next major gripe is that to a newcomer it's not obvious from the start that 'classic' oVirt and the HCI variant are, and will most likely remain, very different beasts, because they have a distinct history and essentially incompatible principles.

Classic oVirt started with shared storage, which is always turned on. That means idle hosts can be turned off and workloads consolidated to minimize energy consumption. It aims for the minimal number of hosts to do the job. Gluster is all about scale-out without any choke points: the more hosts, the better the performance. And when you combine both in an HCI gluster, turning off hosts requires much better attention as to whether those hosts contribute bricks to volumes in use or not.

For a user it's quite natural to mix both, using a set of HCI nodes to provide storage and base capacity and then adding pure compute nodes to provide dynamic workload expansion. But now I'm pretty sure that's uncharted territory, because I see terrible things happening with quorum decisions when compute nodes that don't even contribute bricks to volumes are rebooted, e.g. during updates.

Making the community responsible for providing a unifying vision is asking for help a bit too late in the game.

And then classic oVirt and HCI oVirt have completely different scaling characteristics. Classic will scale from one to N hosts without any issue, but HCI won't even go from 1 to 2 or the more sensible 3 nodes. Nor does it then allow the transition from replicas to the obviously more attractive dispersed volumes as you expand from 3 to 6 or 9 nodes.

How much of a discussion will we have, when I say that I want a Gluster volume to expand/grow/shrink and transform from 1 to N bricks and transition between replicated, dispersed, sharded or non-sharded volumes with oVirt seamlessly running on top? But unfortunately that is the natural expectation any newcomer will have, just like I did, when I read all the nice things Red Hat had to say about the technology.

I hope you won't dispute that it's still very much a patchwork, with a very small chance of near-term resolution.

On Wed, Apr 21, 2021 at 11:02 AM Thomas Hoberg <thomas@hoberg.net> wrote:
Nobody wants bleeding edge on a hypervisor, except those who develop that hypervisor.
oVirt is supposed to deliver a higher reliability than bare metal hardware, by providing a fault tolerant design and automatic fault recovery.
Just to point out that among the core components of this type of hypervisor are, for sure, libvirt and qemu-kvm, and these two components were never the ones provided out of the box by the downstream RHEL version. Also vdsm, for example, another core component, was never part of the downstream OS.

In 4.2 deps.repo:

[ovirt-4.2-epel]
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
[ovirt-4.2-centos-gluster312]
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.12/
[ovirt-4.2-virtio-win-latest]
baseurl=http://fedorapeople.org/groups/virt/virtio-win/repo/latest
[ovirt-4.2-centos-qemu-ev]
baseurl=http://mirror.centos.org/centos/7/virt/$basearch/kvm-common/
[ovirt-4.2-centos-opstools]
baseurl=http://mirror.centos.org/centos/7/opstools/$basearch/
[centos-sclo-rh-release]
baseurl=http://mirror.centos.org/centos/7/sclo/$basearch/rh/
[ovirt-4.2-centos-ovirt42]
baseurl=http://mirror.centos.org/centos/7/virt/$basearch/ovirt-4.2/

In 4.3 deps.repo:

[ovirt-4.3-epel]
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
[ovirt-4.3-centos-gluster6]
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-6/
[ovirt-4.3-virtio-win-latest]
baseurl=http://fedorapeople.org/groups/virt/virtio-win/repo/latest
[ovirt-4.3-centos-qemu-ev]
baseurl=http://mirror.centos.org/centos/7/virt/$basearch/kvm-common/
[ovirt-4.3-centos-ovirt43]
baseurl=http://mirror.centos.org/centos/7/virt/$basearch/ovirt-4.3/
[ovirt-4.3-centos-ovirt-common]
baseurl=http://mirror.centos.org/centos/7/virt/$basearch/ovirt-common/
[ovirt-4.3-centos-opstools]
baseurl=http://mirror.centos.org/centos/7/opstools/$basearch/
[centos-sclo-rh-release]
baseurl=http://mirror.centos.org/centos/7/sclo/$basearch/rh/

Gianluca

Thank you Gianluca, for supporting my claim: it's patchwork and not "a solution designed for the entire enterprise". Instead it's more of "a set of assets where two major combinations from a myriad of potential permutations have received a bit of testing and might be useful somewhere in your enterprise".

As such, I see very little future for oVirt, as anything that doesn't achieve scale these days is doomed to die. I gather IBM is betting on RH in the cloud, but oVirt isn't designed for that (and suffers a license overhead for little if any extra value over the cloud-native stack), and HCI doesn't make sense in any existing cloud: its mission is more like bootstrapping your own. Once you achieve any scale, storage will move to specialized appliances.

I can see oVirt, and especially the HCI variant, on the potentially many stopovers from the edge to the cloud core, and even in special data center holdouts. There, the ability to really deliver the best fault-resilient 1-9 node scalability (somewhat bigger shouldn't be a problem anyway), with the ability to carefully tune and mingle between resilience and storage efficiency, is key. You don't want to redeploy oVirt in a hundred embedded locations across a bigger geography just because you've outgrown 3 nodes.

I could see oVirt run on ships (including space ships), in factories, in the military, on train networks, in schools, or just about any place that needs to combine some local presence with resilience, flexibility and remote management (but low dependence). But you'd have to go at it strategically and with a truly unified approach between the Gluster and oVirt teams.

The management engine, KVM, VDO and Gluster are each brilliant pieces of engineering, the combination of which could be a force to reckon with everywhere outside the cloud. But not with the current approach, where each component is allowed to trudge along at its own pace, hopefully not breaking as each evolves independently.

And of course, the final product must be available free of charge, so money doesn't get in the way of scale. When a nation adopts oVirt to digitalize its schools or its rail systems, or an industry giant to run its factories, revenue should not be an issue. And at the low end you really want to beat QNAP with a 3-node HCI at a similar cost and energy footprint, e.g. using RasPi modules (or just three last-generation smartphones, for that matter). That's how you'd get the scale.

I hope you'll find something valuable in all this rant! And sorry for the bother.

On Fri, Apr 23, 2021 at 9:24 AM Thomas Hoberg <thomas@hoberg.net> wrote:
Thank you Gianluca, for supporting my claim: it's patchwork and not "a solution designed for the entire enterprise".
Actually the meaning of my sentence was the opposite, in the sense that if you consider it a "patchwork" now in 4.4, it was always so; you have been here for many years and, based on your considerations, I think you should have already abandoned it in the 4.3 (or 4.2) days. How it is composed didn't change much with the 4.4 release. It's open source and it's a project. Most of your claims could be made against the RHV product, not oVirt as a project.

And for sure many problems are there in Gluster implementations, but for NFS, FC or iSCSI based setups the situation in my opinion is quite better.

Gianluca

Thank you Gianluca for your honest assessment. Now if only you'd put that on the home page of oVirt, or better yet, used the opportunity to change things.

Yes, after what I know today, I should not have started with oVirt on Gluster, but unfortunately HCI is exactly the most attractive grassroots use case and the biggest growth opportunity for oVirt. AFAIK there is no competition out there. If there was, I probably would have jumped ship already, as attached (habit dependent) as I am to CentOS.

In theory oVirt could start with single-node HCI, and then go to 3-node HCI to gain resilience. That step is already crucial (ideally with a 2-node HCI base on a warm storage standby) and currently easy with Gluster, but "not supported" (with no warning or explanation that I can remember) by oVirt. Whatever the reason, it is not at all obvious to the newcomer, nor should it be insurmountable to my understanding.

And as things go on and you're expanding some clusters, there is a natural tendency to go towards dedicated storage, say after reaching double digits on HCI nodes, because it offers better price/performance and controls. Again, that transition should be "organic" if not seamless, with migration of VMs and their disks, perhaps not live, even if the gaps there should be closable, too.

Nobody else can do that: not VMware, not Nutanix, nor Proxmox. And the cloud guys aren't offering anything right there, even if they probably should, if only to make sure that any such grassroots projects can be forklifted easily into their clouds once their end product takes off.

IMHO this sort of opportunity needs serious seed money; organic growth from the community and direct revenues obviously haven't done it yet. But letting things just go on as they do now, I consider a death knell for oVirt.

Kind regards,
Thomas

This turned into quite a discussion. LOL. A lot of interesting points.

Thomas said -->
If only oVirt was a product rather than only a patchwork design!

I think Sandro already spoke to this a little bit, but I would echo what they (he? she?) said. oVirt is an open source project, so there's really nothing preventing any of us from jumping in and assisting where we can. Granted, I'm not much of a software developer, but eventually I could see how I can contribute my time in some ways: replying to emails on the mailing list, providing engineering input on system-level decisions, testing RC releases, fixing / debugging Ansible scripts (I love Ansible!), helping to update documentation, and so on.

Sandro said -->
I can understand the position here, but the fact that oVirt is developed and stabilized against CentOS Stream which is upstream to RHEL doesn't prevent you to run oVirt on RHEL or any other RHEL rebuild on production. If you face any issue running oVirt on top of a downstream to CentOS Stream please report a bug for it and we'll be happy to handle.

My hosts are running RHEL 8.3, and I have no plans to move to something different. Ironically, though, the vast majority of my VMs are running Ubuntu.

Off topic, but something to address: we need a stable ovirt-guest-agent package. This doesn't seem to be working for me, although I'll take a look at it more closely again when I have some time: https://launchpad.net/ubuntu/focal/+source/ovirt-guest-agent

Thomas said -->
Yes, after what I know today, I should not have started with oVirt on Gluster, but unfortunately HCI is exactly the most attractive grassroots use case and the biggest growth opportunity for oVirt.

Serious question: what's preventing you (or anyone) from just spinning up new storage with NFS, iSCSI or whatever, mounting it to the engine, and migrating your VMs to that new storage? Correct me if I'm wrong, but HCI is more of a philosophy and a framework than anything. There's nothing that prevents us from moving away from an HCI model.

In fact, I'm getting ready to do something that will already start to move my environment in that direction. I have a 4th server that I originally bought last fall for testing and spare parts, but I've decided I want to put it into the datacenter and use it as a backup destination, as well as a second DNS server for the overall environment. I have 3 spinning disks arriving next week that I'll put into a RAID 5 on that server, and my plan is to then build two different NFS mount points and expose those NFS shares to the oVirt cluster. I'm also half tempted to add that server as a host to the cluster as well, because it has 40 cores in it that would otherwise just sit there unused. I'll use one NFS share as a backup domain, and I'll use the other NFS share to store images that don't need SSD speeds, such as ISOs.

Sorry for the rabbit trail, but this plan I think is a classic example of my point that HCI is a philosophy, not a hard and fast rule. You can do whatever you want. If you don't like HCI, why can't you move to something else?

Gianluca said -->
And for sure many problems are there in Gluster implementations, but for NFS, FC or iSCSI based the situation in my opinion is quite better.

That's interesting to hear. Out of curiosity, why keep Gluster around? And also, why hasn't there been much of an effort to support Ceph in an HCI environment? I actually had someone who I know is a Red Hat TAM advise me (off the record, off the clock, not through official Red Hat channels) to stay away from Gluster as well. After my research and testing, though, it was the *only* way I could see how I could deploy my environment on the limited budget that I had (and still have).

Eventually, yes, I would like to get better storage. One idea that comes to mind that would be stable and still relatively cheap is to grab a couple of Synology NAS devices, stick SSDs into them, put them into an HA pair, and expose that storage as an iSCSI mount.

- David
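
A rough sketch of the NFS side of that plan; the paths and network range are placeholders. oVirt expects exported directories to be owned by vdsm:kvm, i.e. uid/gid 36:

mkdir -p /exports/backup /exports/slow
chown 36:36 /exports/backup /exports/slow
chmod 0755 /exports/backup /exports/slow

# /etc/exports
/exports/backup  10.0.0.0/24(rw)
/exports/slow    10.0.0.0/24(rw)

exportfs -ra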

On Sat, Apr 24, 2021 at 3:31 PM David White via Users <users@ovirt.org> wrote:
Off topic, but something to address: We need a stable ovirt-guest-agent package. This doesn't seem to be working for me, although I'll take a look at it more closely again when I have some time: https://launchpad.net/ubuntu/focal/+source/ovirt-guest-agent
ovirt-guest-agent is deprecated in 4.4. See also the downstream documentation here: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/htm...

Gianluca

Yes, recent qemu-guest-agent should be enough.
-- Sandro Bonazzola, Manager, Software Engineering, EMEA R&D RHV, Red Hat EMEA
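
In an Ubuntu guest that should just be the stock package, for example:

apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent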

On Wed, 21 Apr 2021 at 11:02, Thomas Hoberg <thomas@hoberg.net> wrote:
You're welcome to help with oVirt project design and discuss with the community the parts that you think should benefit from a re-design.
I consider these pesky little comments part of the discussion, even if I know they are not the best style.
But how much is there to discuss, if Red Hat has already decided to switch to a beta base (CentOS Stream) underneath oVirt?
Nobody wants bleeding edge on a hypervisor, except those who develop that hypervisor.
I can understand the position here, but the fact that oVirt is developed and stabilized against CentOS Stream, which is upstream of RHEL, doesn't prevent you from running oVirt on RHEL or any other RHEL rebuild in production. If you face any issue running oVirt on top of a downstream of CentOS Stream, please report a bug for it and we'll be happy to handle it.
oVirt is supposed to deliver higher reliability than bare-metal hardware by providing a fault-tolerant design and automatic fault recovery.

But if the software stack that the HA engine builds on is more volatile than the hardware below, it simply can't do its work of increasing overall resilience: a beta OS kills the value of an HA management stack above.

Only a RHEL downstream like CentOS is anywhere near solid enough to build on (unless you did fully validated oVirt Node images). Bleeding edge is what you put inside the VMs, not underneath. I don't think I have heard a single oVirt *user* advocating the switch to Stream. IMHO it's political and kills oVirt's value proposition.
And I understand the point of view on this and I've nothing against it. CentOS Linux 8 is going to reach EOL in a few months and the oVirt project can't keep using it for development. CentOS Stream is upstream of whatever CentOS Stream derivative (including RHEL) oVirt users are going to use in production, so we need to ensure oVirt will run on what's coming next, in order to avoid oVirt users getting broken systems once an update reaches production. So from the oVirt development perspective CentOS Stream is right now the only choice. That said, really, in production you are not required to use CentOS Stream if you don't want to.
My next major other gripe is that to a newcomer it's not obvious from the start that the oVirt 'classic' and the HCI variant are and will most likely remain very different beasts, because they have a distinct history and essentially incompatible principles.
Classic oVirt started with shared storage, which is always turned on. That means idle hosts can be turned off and workloads consolidated to minimize energy consumption. It aims for the minimal number of hosts to do the job.
Gluster is all about scale-out without any choke points: the more hosts, the better the performance.

And when you combine both in HCI Gluster, turning off hosts requires much more attention to whether those hosts contribute bricks to volumes that are in use or not.
For a user it's quite natural to mix both, using a set of HCI nodes to provide storage and base capacity and then add pure compute nodes to provide dynamic workload expansion.
But now I'm pretty sure that's uncharted territory, because I see terrible things happening with quorum decisions when compute nodes that don't even contribute bricks to volumes are rebooted, e.g. during updates.
And this can be made clearer either on the download page or in the installation guide documentation for newcomers. Let's do it. I can open a PR on https://github.com/oVirt/ovirt-site, or you can do it and we can work together on ensuring this will be clear for newcomers.
Making the community responsible for providing a unifying vision is asking for help a bit too late in the game.
And then classic oVirt and HCI oVirt have completely different scaling characteristics. Classic will scale from one to N hosts without any issue, but HCI won't even go from 1 to 2 or the more sensible 3 nodes. Nor does it then allow the transition from replicas to the obviously more attractive dispersed volumes as you expand from 3 to 6 or 9 nodes.
How much of a discussion will we have, when I say that I want a Gluster volume to expand/grow/shrink and transform from 1 to N bricks and transition between replicas, dispersed volumes, sharded or non sharded with oVirt seamlessly running on top?
I'll let Gluster team discuss this, I lack the needed knowledge to give meaningful replies.
But unfortunately that is the natural expectation any newcomer will have, just like I did, when I read all the nice things Red Hat had to say about the technology.
I hope you won't dispute that it's still very much a patchwork, with a very small chance of near-term resolution.
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com <https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.*

How much of a discussion will we have, when I say that I want a Gluster volume to expand/grow/shrink and transform from 1 to N bricks and transition between replicas, dispersed volumes, sharded or non sharded with oVirt seamlessly running on top?

It's dead simple: replica 3 (and arbitrated replica 3) is the safest and most performant volume type. Dispersed volumes are like erasure coding in Ceph: you can save space, but performance will be mediocre. The oVirt UI prevents you from creating such a volume, but it doesn't prevent you from using one created outside of it. In both cases (replicated and dispersed volumes) you cannot expand to an arbitrary brick count. What you are asking for is a distributed volume in Gluster, but in order to be safe you need a distributed-replicated one.
Imagine that you have 1 brick and you want to add 4 more, so 5 in total. You have 2 options:

A) Use a distributed volume, but when a node is rebooted you lose access to its data -> quite not good, right?

B) Use a distributed-replicated (replica 3 arbiter 1) volume to spread data among 4 nodes, with the 5th as your arbiter for both subvolumes.

The second option protects you against failures during reboots, yet the (safe) upgrade path is limited (assuming that you want only 1 brick per host, as you use HW RAID with all disks in it): 1 host -> 3 hosts -> 5/6 hosts (5 -> arbitrated, 6 -> full replica) and so on. It's easy to say that you want to go from 1 to N bricks, but safety always comes first. Yet, oVirt is not like other proprietary software - it lets you use POSIX-compliant solutions, which allows you to use even a pure distributed volume. It just won't let you shoot yourself in the foot from within oVirt...

Best Regards,
Strahil Nikolov
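As a rough sketch of option (B), with invented host names and brick paths (not from this thread), the volume would be built as two replica 3 arbiter 1 subvolumes, with both arbiter bricks landing on the 5th host:

    gluster volume create vmstore replica 3 arbiter 1 \
        host1:/gluster/brick1 host2:/gluster/brick1 host5:/gluster/arbiter1 \
        host3:/gluster/brick1 host4:/gluster/brick1 host5:/gluster/arbiter2
    # sharding is what the oVirt/RHHI docs recommend for VM image workloads
    gluster volume set vmstore features.shard on
    gluster volume start vmstore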

Hi Strahil,

I've tried to measure the cost of erasure coding and, more importantly, of VDO with de-duplication and compression a bit. Erasure coding should be negligible in terms of CPU power, while the vastly more complex LZ4 compression (used inside VDO) really is rather impressive at 1 GByte/s single threaded for compression (6 GByte/s decompression, on a 25 GByte/s memory bus) on the 15 Watt NUCs I am using for one cluster.

The storage I/O overhead of erasure coding shouldn't really matter with NVMe becoming cheaper than SATA SSD. Perhaps the write amplification needs to be watched with SSDs, but a lot of that is writeback tuning, and with a Gluster in the back you can commit to RAM as long as you have a quorum (and a UPS). Actually, with Gluster I guess most of the erasure coding would be done by the client, and the network amplification would also be there, but not really different between erasure coding and replicas: if you write to nine nodes, you write to nine nodes from the client, independent of the encoding. There, the ability to say "please continue to use the 4:2 dispersion as I expand from 6 to 9 nodes, and roll that across on a shard-by-shard basis without me having to set up bricks like that" would certainly help.

With all of VDO enabled I get 200 MByte/s for a random data workload on FIO via Gluster, which becomes 600 MByte/s for reads with 3 replicas on the 10 Gbit network I use, 60% of the theoretical maximum with random I/O. That's completely adequate, because we're not running HPC or SAP batches here, and I'd be rather sure that using erasure coding with 6 and 9 nodes won't introduce a performance bottleneck unless I go to 40 or 100 Gbit on the network.

I'd just really want to be able to choose between, say, 1, 2 or 3 out of 9 bricks being used for redundancy, depending on whether it's an HCI block next door, going into a ship with months at sea, or into a space station. I'd also probably add an extra node or two to act as warm (even cold) standby in critical or hard-to-reach locations, which act as compute-only nodes initially (to avoid quorum splits), but can be promoted to replace a failed storage node without hands-on intervention.

oVirt HCI is as close as it gets to LEGO computers, but right now it's doing LEGO with your hands tied behind your back.

Kind regards,
Thomas
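To make the 4:2 dispersion above concrete, a hedged sketch with invented node names (and, as discussed, exactly the kind of volume the oVirt HCI wizard will not create for you) might look like this; note that expansion only works in whole 4+2 brick sets, which is why a 6 -> 9 node step is not possible:

    gluster volume create archive disperse-data 4 redundancy 2 \
        node{1..6}:/gluster/archive/brick
    gluster volume start archive

    # growing the volume later means adding another complete 4+2 set,
    # turning it into a distributed-dispersed volume (6 -> 12 bricks):
    gluster volume add-brick archive node{7..12}:/gluster/archive/brick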

Right now we don't have any plan for supporting dispersed Gluster volumes from HCI. We have a BZ to stop creating a Storage Domain on such unsupported volumes [1]. We only recommend replica 3 or replica 2 + arbiter with sharding for VM store use cases.

1. https://bugzilla.redhat.com/show_bug.cgi?id=1951894

On Wed, Apr 21, 2021 at 12:39 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On Tue, 20 Apr 2021 at 03:00, Thomas Hoberg <thomas@hoberg.net> wrote:
Sharing disks typically requires that you need to coordinate their use above the disk.
So did you consider sharing a file system instead?
Members in my team have been using NetApp for their entire career and are quite used to sharing files even for databases.
And since Gluster HCI basically builds disks out of a replicated file system, why not use that directly? All they do these days is mount some parts of oVirt's 'data' volume inside the VMs as a GlusterFS. We just create a separate directory to avoid stepping on oVirt's toes and mount that on the clients, who won't see or disturb the oVirt images.
They also run persistent Docker storage on these with Gluster mounted by the daemon, so none of the Gluster stuff needs to be baked into the Docker images. Gives you HA, zero extra copying and very fast live migrations, which are RAM content only.
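A hedged sketch of the kind of client-side mount this describes (volume, directory and host names invented; Gluster 3.12+ can mount a sub-directory of a volume, so the guests never see the oVirt image files):

    # inside a VM
    mkdir -p /mnt/shared
    mount -t glusterfs -o backup-volfile-servers=hci2:hci3 \
        hci1:/data/shared /mnt/shared

    # or persistently in /etc/fstab:
    # hci1:/data/shared  /mnt/shared  glusterfs  defaults,_netdev,backup-volfile-servers=hci2:hci3  0 0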
I actually added separate Glusters (not managed by oVirt) using erasure-coded dispersed volumes for things that are not databases, because the storage efficiency is much better and a lot of that data is read-mostly. These are machines that oVirt sees as pure compute hosts, but they offer distinct Gluster volumes to all types of consumers via GlusterFS (NFS or SMB would work, too).
Too bad oVirt breaks with dispersed volumes, and Gluster won't support a seamless migration from 2+1 replica+arbiter to, say, 7:2 dispersed volumes as you add triplets of hosts...
+Gobinda Das <godas@redhat.com> you may be interested joining this discussion.
If only oVirt was a product rather than only a patchwork design!
You're welcome to help with oVirt project design and discuss with the community the parts that you think should benefit from a re-design.
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com <https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*
-- Thanks, Gobinda

And you expect newcomers to find that significant bit of information within the reference that you quote, as they try to evaluate whether oVirt is the right tool for the job?

I only found out once I tried to add dispersed volumes to an existing 3-node HCI and dug through the log files. Of course, I eventually managed to remove the nicely commented bits of Ansible code that prevented adding the volume, only to find that it could not be used to run VMs or to hold disks. I can still mount those volumes from inside the VMs via a GlusterFS client, and I'd guess that there is little if any difference in performance.

For an enterprise HCI solution, the usable intersection between oVirt and Gluster is so small it needs a magnifying glass, and it belongs very early and very prominently in the documentation. Gluster advertises itself as a scale-out file system without any metadata choke point as the main differentiator vs. Lustre etc., with a tunable trade-off between read amplification via replicas and resilience. Nobody expects scale-out to mean 1 or 3, with perhaps 6 and 9 as a special option. Or only replicas actually supported by oVirt, when erasure coding should at least in theory give you near-perfect scalability, which means you can add increments of one or any bigger number and freely allocate between capacity and resilience.

It's perfectly legitimate to not support every potential permutation of deployment scenarios of oVirt and Gluster. But the limitations baked in, and perhaps even the motivation, need to be explained from the very start. It doesn't help oVirt's adoption and success if people only find out after they have invested heavily under the assumption that a "scale-out" solution delivers what that term implies.
participants (8)
- David White
- Gianluca Cecchi
- Gobinda Das
- Jayme
- Sandro Bonazzola
- Strahil Nikolov
- Thomas Hoberg
- Vojtech Juranek