Q: Hybrid GlusterFS / local storage setup?

Hello all, I'm thinking about converting a couple of old dual Xeon V2 workstations into (yet another) oVirt setup. However, the use case for this cluster is somewhat different: While I do want most of the VMs to be highly available (Via 2+1 GFS storage domain), I'd also want pin at least one "desktop" VM to each host (possibly with vGPU) and let this VM access the local storage directly in-order to get near bare metal performance. Now, I am aware that I can simply share an LVM LV over NFS / localhost and pin a specific VM to each specific host, and the performance will be acceptable, I seem to remember that there's a POSIX-FS storage domain that at least in theory should be able to give me per-host private storage. A. Am I barking at the wrong tree here? Is this setup even possible? B. If it is even possible, any documentation / pointers on setting up per-host private storage? I should mention that these workstations are quite beefy (64-128GB RAM, large MDRAID, SSD, etc) so I can spare memory / storage space (I can even split the local storage and GFS to different arrays). - Gilboa
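P.S. For concreteness, by "share an LVM LV over NFS / localhost" I mean something roughly like the sketch below. The VG/LV names and export path are made up, and this ignores the shared-access questions around NFS domains:

# lvcreate -L 200G -n desktop_lv data_vg          # hypothetical VG and LV names
# mkfs.xfs /dev/data_vg/desktop_lv
# mkdir -p /exports/desktop
# mount /dev/data_vg/desktop_lv /exports/desktop
# chown 36:36 /exports/desktop                    # vdsm:kvm, so oVirt can write to the export
# echo '/exports/desktop *(rw)' >> /etc/exports
# exportfs -ra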

On Wed, Oct 14, 2020, 12:42 Gilboa Davara <gilboad@gmail.com> wrote:
> Hello all,
>
> I'm thinking about converting a couple of old dual Xeon V2 workstations into (yet another) oVirt setup. However, the use case for this cluster is somewhat different: while I do want most of the VMs to be highly available (via a 2+1 GFS

GlusterFS?

> storage domain), I'd also want to pin at least one "desktop" VM to each host (possibly with vGPU) and let this VM access the local storage directly in order to get near bare-metal performance.
This makes sense.

> Now, I am aware that I can simply share an LVM LV over NFS / localhost and pin a specific VM to each specific host, and the performance will be acceptable,

This will not be as fast as a local device passed through to the VM.

It will also be problematic, since all hosts will mount, monitor, and maintain leases on this NFS storage, because it is treated as shared storage. If a host fails to access this NFS storage, that host will be deactivated and all its VMs will be migrated to other hosts. This migration storm can cause a lot of trouble. In the worst case, if no other host can access this NFS storage, all the other hosts will be deactivated.

> I seem to remember that there's a POSIX-FS storage domain that at least in theory should be able to give me per-host private storage.

This is the same as NFS (internally it is the same code). It will work only if you can mount the same device/export on all hosts. This is even worse than NFS.
> A. Am I barking up the wrong tree here? Is this setup even possible?

This is possible using a host device. You can attach a host device to a VM. This will pin the VM to the host and give the best performance. It may not be flexible enough, since you need to attach the entire device. Maybe it can work with LVM logical volumes.
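To illustrate what that would mean at the libvirt level (completely outside what the engine manages - the VG/LV name, VM name, and file name below are made up):

# cat > lv-disk.xml <<'EOF'
<disk type='block' device='disk'>
  <!-- raw local block device, no image format, for near bare-metal performance -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/data_vg/desktop_lv'/>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
# virsh attach-device desktop-vm lv-disk.xml --persistent

The engine would not track, snapshot, or migrate such a disk, so treat this only as a sketch of the mechanism, not as a supported oVirt flow.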
Nir

> B. If it is even possible, any documentation / pointers on setting up per-host private storage?
>
> I should mention that these workstations are quite beefy (64-128GB RAM, large MDRAID, SSD, etc.), so I can spare memory / storage space (I can even split the local storage and GFS to different arrays).
> - Gilboa

Hello Nir,

Thanks for the prompt answer.

On Wed, Oct 14, 2020 at 1:02 PM Nir Soffer <nsoffer@redhat.com> wrote:
> GlusterFS?
Yep, GlusterFS. Sorry, wrong abbreviation on my end.
> This will not be as fast as a local device passed through to the VM.
>
> It will also be problematic, since all hosts will mount, monitor, and maintain leases on this NFS storage, because it is treated as shared storage. If a host fails to access this NFS storage, that host will be deactivated and all its VMs will be migrated to other hosts. This migration storm can cause a lot of trouble. In the worst case, if no other host can access this NFS storage, all the other hosts will be deactivated.
>
> This is the same as NFS (internally it is the same code). It will work only if you can mount the same device/export on all hosts. This is even worse than NFS.
OK. Understood. No NFS / POSIX-FS storage, then.
>> A. Am I barking up the wrong tree here? Is this setup even possible?
>
> This is possible using a host device. You can attach a host device to a VM. This will pin the VM to the host and give the best performance. It may not be flexible enough, since you need to attach the entire device. Maybe it can work with LVM logical volumes.
I've got room to spare.

Any documentation on how to achieve this (or some pointers on where to look)? I couldn't find an LVM / block device option under host devices / storage domains / etc., and a Google search returned irrelevant results.

- Gilboa

On Thu, Oct 15, 2020 at 8:45 AM Gilboa Davara <gilboad@gmail.com> wrote:
> I've got room to spare. Any documentation on how to achieve this (or some pointers on where to look)?
It should be documented on ovirt.org and in the RHV documentation:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/htm...
> I couldn't find an LVM / block device option under host devices / storage domains / etc., and a Google search returned irrelevant results.
I tested locally; LVM devices are not available in:
Compute > Hosts > {hostname} > Host Devices

It looks like libvirt does not support device-mapper devices. You can try:

# virsh -r nodedev-list

to see the supported devices. The list seems to match what oVirt displays in the Host Devices tab.
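For example, you can narrow the listing down by capability and inspect a single device (the PCI address below is just a placeholder):

# virsh -r nodedev-list --cap pci            # PCI devices (candidates for passthrough)
# virsh -r nodedev-list --cap storage        # whole disks (SATA/SAS) that libvirt exposes
# virsh -r nodedev-dumpxml pci_0000_03_00_0  # details for one device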
So your only option is to attach the entire local device to the VM, either using PCI passthrough or as a SCSI disk.

Nir

On Thu, Oct 15, 2020 at 2:38 PM Nir Soffer <nsoffer@redhat.com> wrote:
> So your only option is to attach the entire local device to the VM, either using PCI passthrough or as a SCSI disk.
Full SCSI passthrough per "desktop" VM is overkill for this use case. (Plus, I don't see MD devices in the list, only plain SATA/SAS devices.)

Any idea if there are plans to add support for LVM devices (or any other block device)?

- Gilboa

On Thu, Oct 15, 2020 at 3:20 PM Gilboa Davara <gilboad@gmail.com> wrote:
> Full SCSI passthrough per "desktop" VM is overkill for this use case. (Plus, I don't see MD devices in the list, only plain SATA/SAS devices.)
> Any idea if there are plans to add support for LVM devices (or any other block device)?
I don't think there is such a plan, but it makes sense to support such usage.

Please file an ovirt-engine RFE explaining the use case, and we can consider it for a future version.

Nir

On Thu, Oct 15, 2020 at 3:27 PM Nir Soffer <nsoffer@redhat.com> wrote:
> I don't think there is such a plan, but it makes sense to support such usage.
> Please file an ovirt-engine RFE explaining the use case, and we can consider it for a future version.
Done: https://bugzilla.redhat.com/show_bug.cgi?id=1889138

Thanks again for the help!

- Gilboa

Hi,

On 10/18/20 11:16 AM, Gilboa Davara wrote:
> Any idea if there are plans to add support for LVM devices (or any other block device)?
I also vote for this feature. There is also this RFE, which could allow similar functionality (allowing the use of local disk storage in a shared-storage cluster):
https://bugzilla.redhat.com/show_bug.cgi?id=1406412

Cheers,
Jiri

Thanks. Added my bug report as a possible duplicate.

- Gilboa

On Sun, Oct 18, 2020 at 3:13 PM Jiří Sléžka <jiri.slezka@slu.cz> wrote:
> I also vote for this feature. There is also this RFE, which could allow similar functionality (allowing the use of local disk storage in a shared-storage cluster):
> https://bugzilla.redhat.com/show_bug.cgi?id=1406412

Hi Gilboa,

I think that storage domains need to be accessible from all nodes in the cluster - and as yours will be using local storage and yet be in a 2-node cluster, that will be hard.

My guess is that you can try the following cheat: create a single-brick Gluster volume and make some modifications:
- volume type 'replica 1'
- cluster.choose-local should be set to yes once you apply the virt group of settings (because the group sets it to no)
- set up your VMs in such a way that they don't fail over

Of course, creation of new VMs will happen from the host with the SPM flag, but the good thing is that you can change which host holds that flag. So if your Gluster volume has a brick ovirt1:/local_brick/brick, you can set the host ovirt1 to 'SPM' and then create your VM.

Of course, the above is pure speculation, as I picked 'replica 3 arbiter 1' for my own setup and traded storage space for live migration.

Best Regards,
Strahil Nikolov
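P.S. Roughly, creating such a single-brick volume would look like the sketch below. The volume name 'local_fast' is just an example, and you should verify the options against your Gluster version:

# gluster volume create local_fast ovirt1:/local_brick/brick force   # 'force' may be needed depending on the brick path
# gluster volume set local_fast group virt                           # apply the virt group of settings
# gluster volume set local_fast cluster.choose-local on              # re-enable local reads (the virt group turns it off)
# gluster volume start local_fast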

On Wed, Oct 14, 2020 at 1:29 PM Strahil Nikolov via Users <users@ovirt.org> wrote:
> My guess is that you can try the following cheat: create a single-brick Gluster volume and make some modifications:
> - volume type 'replica 1'
> - cluster.choose-local should be set to yes once you apply the virt group of settings (because the group sets it to no)
> - set up your VMs in such a way that they don't fail over
This has the same issues as a "shared" NFS domain served by one host.
participants (4)
- Gilboa Davara
- Jiří Sléžka
- Nir Soffer
- Strahil Nikolov