Hello,

I have a very similar setup to yours and have just recently started testing OVA exports to NFS-attached storage for backup purposes.

I have a three-node HCI cluster on GlusterFS (SSD-backed) with 10Gbit storage, and my oVirt management network is 10Gbit as well. My NFS storage server runs CentOS 8.x with a 10Gbit link and has 8 x 8TB 7200 RPM drives in RAID10.

I haven't done any specific measurements yet since I only set up the storage today, but a test export of a 50GB VM took about 10 minutes start to finish.
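(If my math is right, that works out to roughly 50 GB / 600 s ≈ 83 MB/s average throughput.)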

I will hopefully be doing some further testing over the next few weeks and would be interested to hear how you get along as well. If it's helpful, I'd be happy to run any tests you're interested in on my equipment to see how it compares.

- Jayme

On Wed, Jan 22, 2020 at 10:16 AM Jürgen Walch <jwalch@plumsoft.de> wrote:
Hello,

we are using oVirt on a production system with a three-node hyperconverged cluster based on GlusterFS and a 10Gbit storage backbone network.
Everything runs smoothly except OVA exports.

Each node has an NFS share mounted at

        /data/ova

with the custom mount option "soft".
The NFS server is a plain vanilla CentOS 7 host whose /etc/exports contains the line

        /data/ova     *(rw,all_squash,anonuid=36,anongid=36)
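
For completeness, the client-side mount on the nodes looks roughly like the line below; the server name and the rsize/wsize/version values are only placeholders, "soft" is the one option I added explicitly:

        nfsserver:/data/ova  /data/ova  nfs  soft,rsize=1048576,wsize=1048576,vers=4.1  0  0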

When exporting VMs as OVA via the engine web GUI, the export is terribly slow (~4 MiB/s). It succeeds for small disks (up to 20GB), but exporting larger disks fails with a timeout.
The network link between the oVirt nodes and the NFS server is 1Gbit.

I have done a little testing and looked at the code in /usr/share/ovirt-engine/playbooks/roles/ovirt-ova-pack/files/pack_ova.py.
It seems the export works by setting up a loop device /dev/loopX on the exporting node, backed by a freshly created sparse file /data/ova/{vmname}.tmp on the NFS share, and then writing the disk with qemu-img using /dev/loopX as the target.
Using iotop on the node doing the export I can see write rates of only 2-5 MiB/s on the /dev/loopX device.
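
Reduced to shell commands, the sequence looks roughly like the lines below. This is just my reading of the playbook for testing outside the engine; the file name, the size and the exact losetup/qemu-img options are placeholders and may differ from what pack_ova.py really uses:

        # create a sparse target file on the NFS share (size is only an example)
        truncate -s 60G /data/ova/test.tmp
        # attach it to a free loop device and remember the device name
        LOOP=$(losetup --find --show /data/ova/test.tmp)
        # write the disk image into the loop device, as the playbook does
        qemu-img convert -O qcow2 /path/to/disk.img "$LOOP"
        # detach the loop device again
        losetup -d "$LOOP"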

When copying to the NFS share /data/ova with dd or qemu-img *directly* (that is, using /data/ova/test.img as the target instead of the loop device) I get write rates of ~100 MiB/s, which is the expected performance of the NFS server's underlying harddisk system and of the network connection. It seems the loop device is the bottleneck.
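
For reference, the direct tests were along the lines of the following; block size, count and file names are just examples, not the exact parameters I used:

        # plain sequential write straight onto the NFS share
        dd if=/dev/zero of=/data/ova/test.img bs=1M count=10240 oflag=direct status=progress
        # same qemu-img conversion, but with a file on the share as target
        qemu-img convert -O qcow2 /path/to/disk.img /data/ova/test.img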

So far I have been playing with the NFS mount options and with the options passed to qemu-img in /usr/share/ovirt-engine/playbooks/roles/ovirt-ova-pack/files/pack_ova.py, without any success.
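
Just to give an idea of the kind of tweaks I mean (illustrative only, not an exact or exhaustive list of what I tried):

        # qemu-img cache mode for the target and out-of-order writes
        qemu-img convert -O qcow2 -t none -W /path/to/disk.img /dev/loopX
        # larger NFS read/write sizes on the client mount
        mount -t nfs -o soft,rsize=1048576,wsize=1048576 nfsserver:/data/ova /data/ova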

Any ideas, or anyone with similar problems? 😊

--
juergen walch

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/HZIAYHUKQ5XHGPM3PC4O5GGKHCB52XKU/