On Tue, Feb 5, 2019 at 3:58 PM Hetz Ben Hamo <hetz(a)hetz.biz> wrote:
Hi,
I'm doing some very simple disk benchmarks on Windows 10, both on ESXi
6.7 and on oVirt 4.3.0.
Both Windows 10 Pro guests have all the drivers installed.
The storage (a datastore in VMware, a storage domain in oVirt) comes
from the same ZFS machine, mounted over NFS in both cases *without* any
NFS mount parameters.
The ESXi host is an HP DL360 G7 with an E5620 CPU, while the oVirt node
is an IBM x3550 M3 with dual Xeon E5620s. There are no memory issues;
both machines have plenty of free memory and CPU resources.
Screenshots:
- Windows 10 in vSphere 6.7 -
https://imgur.com/V75ep2n
- Windows 10 in oVirt 4.3.0 -
https://imgur.com/3JDrWLx
As you can see, while oVirt lags a bit in 4K reads, the write
performance is really bad.
382 MiB/s vs 54 MiB/s? Smells like someone is cheating :-)
Maybe VMware is using buffered I/O, so you are testing writes to the
host buffer cache, while oVirt is using cache=none and actually writing
to the remote storage.
But this is only a wild guess; we need many more details.
Let's start by getting more details about your setup, so we can
reproduce it in our lab.
- What is the network topology?
- What is the spec of the NFS server?
- What is the VM configuration on both oVirt and VMware?
- How do you test? How much data is written during the test?
- How does it compare to running the same test on the hypervisor?
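For a quick host-side baseline, dd is crude but available everywhere. This is only a sketch - the mount point below is a made-up placeholder, substitute the actual path of your NFS storage domain. The conv=fsync flag forces the data out before dd reports, so the host page cache cannot inflate the number (similar in spirit to what cache=none does for the VM):

```shell
# Hypothetical mount point - replace with your actual NFS storage domain path.
MNT=/rhev/data-center/mnt/nfs-server:_export

# Write 1 GiB and fsync before dd exits, so the reported rate reflects
# what actually reached the NFS server, not the host page cache.
dd if=/dev/zero of="$MNT/ddtest.img" bs=1M count=1024 conv=fsync

# Clean up the test file.
rm -f "$MNT/ddtest.img"
```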
For oVirt, getting the VM XML would be helpful - try:
Find the VM id:
    sudo virsh -r list
Dump the XML:
    sudo virsh -r dumpxml N
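In the dumped XML, the thing to check for the buffered-I/O guess above is the cache attribute on the disk's driver element. On an NFS storage domain with cache=none it would look something like this (the file path, UUID placeholder, and device names here are made-up examples, not taken from your setup):

```xml
<disk type='file' device='disk'>
  <!-- cache='none' means QEMU opens the image with O_DIRECT,
       bypassing the host page cache on every write -->
  <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads'/>
  <source file='/rhev/data-center/mnt/nfs-server:_export/SD_UUID/images/IMG_UUID/VOL_UUID'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

If VMware's disk is effectively write-back cached while this shows cache='none', that alone could explain a large part of the write gap.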
For testing, you should probably use fio:
https://bluestop.org/fio/
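To get directly comparable numbers, a fio job along these lines would cover the sequential-write and 4K-random-read cases from the screenshots. This is only a sketch - the file name, size, runtime, and queue depths are arbitrary choices, not values from your benchmark:

```ini
; Sketch of a fio job: sequential write plus 4K random read,
; roughly matching the benchmark screenshots.
[global]
filename=fio-test.dat
size=1g
; direct=1 bypasses the guest page cache
direct=1
; Windows async engine; on Linux use ioengine=libaio instead
ioengine=windowsaio
runtime=60
time_based

[seq-write]
rw=write
bs=1m
iodepth=8

[rand-read-4k]
; stonewall: start only after the previous job completes
stonewall
rw=randread
bs=4k
iodepth=32
```

Running the same job file with `fio <jobfile>` in both guests and on the hypervisor itself would tell us whether the gap comes from the VM layer or from the storage path.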
Added people who can help to diagnose this or at least ask better questions.
Nir