Hello.
We have installed oVirt on an HPE ProLiant XL270d Gen10.
The server has 2x Intel(R) Xeon(R) Gold 6154 CPUs @ 3.00GHz, 1.5TB RAM, 23TB of local SSD, 4x NVIDIA V100 32GB GPUs, and 2x Mellanox Technologies MT27800 Family [ConnectX-5] NICs.


The Mellanox cards are dual-port 100Gbit NICs; each is connected via a single 100Gbit port to a Mellanox 100Gbit switch, which has access to a 16GByte storage server.
MTU is set to 9000 for jumbo frames.
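As a sanity check (not part of the setup description above), jumbo frames can be verified end to end with a do-not-fragment ping; the interface name and storage IP below are placeholders:

# Confirm the configured MTU on the Mellanox interface (interface name is a placeholder)
ip link show ens1f0 | grep mtu

# 8972 bytes payload + 8 ICMP + 20 IP headers = 9000; this fails if any hop drops jumbo frames
ping -M do -s 8972 <storage-server-ip>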

Running our application in a VM created on this server gives lower network performance than running it on the host.
I tried the standard oVirt network driver, a Virtual Function (SR-IOV), and PCI passthrough (moving one of the Mellanox cards directly into the VM), but I never get the same result as on the host.
I also tried allocating the NUMA node directly to the VM, but it does not improve the results.
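For reference, a rough sketch of the locality checks I mean, assuming libvirt-level access on the host (interface name, VM name, and core range are placeholders; oVirt also exposes NUMA and CPU pinning in the VM edit dialog):

# Which NUMA node owns the ConnectX-5 (interface name is a placeholder)
cat /sys/class/net/ens1f0/device/numa_node

# Host NUMA layout, to see which cores belong to that node
numactl --hardware

# Pin a vCPU to the NIC's node, assuming node 0 spans cores 0-17 (VM name is a placeholder)
virsh vcpupin myvm 0 0-17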

I am testing with FIO, performing sequential reads from the storage. This is the command we run on both the host and the VM:

sqream@host-3-171 /media/StorONE1/tpch10t_for_4.0/logs/192.168.3.171_5000 $ fio --randrepeat=1 --ioengine=sync --direct=1 --gtod_reduce=1 --name=test --filename=/media/StorONE3/t1e1s1111t221www22.file --bs=2m --iodepth=24 --size=25G  --numjobs=14  --readwrite=read --rwmixread=100
Run status group 0 (all jobs):
   READ: bw=3919MiB/s (4109MB/s), 280MiB/s-314MiB/s (294MB/s-329MB/s), io=350GiB (376GB), run=81630-91459msec
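One note on the command itself: with --ioengine=sync, fio ignores iodepth values above 1, so the 24 there has no effect and parallelism comes only from numjobs. A variant with libaio would actually drive the queue depth, which may behave differently in the guest (same file and sizes as the original command):

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=/media/StorONE3/t1e1s1111t221www22.file --bs=2m --iodepth=24 \
    --size=25G --numjobs=14 --readwrite=read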

Before installing oVirt, I measured 7.5GB/sec on the host running the same FIO command.
Are there any optimizations I should perform to make sure the guest gets the best possible network performance? I am mostly concerned about GPFS performance, as that is our main filesystem.
At the same time, running the test on another host that is not running oVirt, I can still get 7.9GB/sec.
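To separate raw network throughput from the storage path, it may also be worth comparing host vs. VM with a pure network benchmark such as iperf3 (the server IP is a placeholder):

# On the storage-side machine
iperf3 -s

# From the host, then from the VM, with several parallel streams
iperf3 -c 192.168.3.1 -P 8 -t 30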

I would appreciate any comments or suggestions.

Guy Brodny

Cloud Architecture & DevOps Manager | SQream
M: +972-54-2279528
sqream.com | LinkedIn | Twitter