➢ I have a very similar setup to yours and have just recently started testing OVA
exports for backup purposes to NFS-attached storage.
➢ I have a three-node HCI on GlusterFS (SSD-backed) with 10Gbit, and my oVirt management
network is 10Gbit as well. My NFS storage server has 8 x 8TB 7200 RPM drives in RAID 10,
running CentOS 8.x with a 10Gbit link.
Our setups are indeed similar, the main difference being that my management network,
including the connection to the NFS server, is only 1Gbit. Only GlusterFS has 10Gbit
here.
➢ I haven't done specific measurements yet, as I only set up the storage today, but a test
export of a 50GB VM took roughly 10 minutes start to finish.
Doing the maths, that is ~80MiB/s (50GB in ~600s), about 20 times faster than in my setup. Lucky you 😊
Much less than the ~1.2GB/s your 10Gbit link between NFS server and nodes could provide,
but maybe close to the limit of the drives in your NFS server.
The interesting thing is that when I set up an export domain, stop the VM and do
an export to the *same* NFS server, I get write speeds as expected.
Only the OVA export is terribly slow.
The main difference I can see is the use of a loop device when exporting to OVA.
The export to the export domain does something like

/usr/bin/qemu-img convert -p -t none -T none -f raw {source disk on GlusterFS} {target disk on NFS server}

whereas the OVA export does

/usr/bin/qemu-img convert -T none -O qcow2 {source snapshot on GlusterFS} /dev/loopX

with /dev/loopX pointing to the OVA target image on the NFS server.
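
I haven't tried to isolate this properly yet, but if you want to check on your side
whether the loop device is the culprit, something like the following should mimic both
paths outside of oVirt (untested sketch, placeholders as above; losetup prints the
device it allocates, which I'll call /dev/loopX here):

# direct write to a file on the NFS mount
time /usr/bin/qemu-img convert -p -t none -T none -O qcow2 {source snapshot on GlusterFS} {test file on NFS server}

# same write, but through a loop device backed by a file on the NFS mount
truncate -s {disk size} {second test file on NFS server}
losetup --find --show {second test file on NFS server}
time /usr/bin/qemu-img convert -p -t none -T none -O qcow2 {source snapshot on GlusterFS} /dev/loopX
losetup -d /dev/loopX

If the second run is much slower, that would point at the loop device (or its
interaction with NFS) rather than at qemu-img or the network.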
If you have the time and are willing to test, I would be interested in how fast your
exports to an export domain are.
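For a comparable number, timing one disk copy by hand should be enough; that is
essentially what the export domain path boils down to:

time /usr/bin/qemu-img convert -p -t none -T none -f raw {source disk on GlusterFS} {target disk on NFS server}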
--
juergen