[Users] VirtIO disk latency

Markus Stockhausen stockhausen at collogia.de
Thu Jan 9 10:09:11 UTC 2014


> From: sander.grendelman at gmail.com 
> Sent: Thursday, January 9, 2014 10:32
> To: Markus Stockhausen
> Cc: users at ovirt.org
> Subject: Re: [Users] VirtIO disk latency
> 
> On Thu, Jan 9, 2014 at 10:16 AM, Markus Stockhausen
> <stockhausen at collogia.de> wrote:
> ...
> > - access NFS inside the hypervisor - 12,000 I/Os per second - or 83us latency
> > - access DISK inside ESX VM that resides on NFS - 8,000 I/Os per second - or 125us latency
> > - access DISK inside oVirt VM that resides on NFS - 2,200 I/Os per second - or 450us latency
> 
> I can do a bit of testing on local disk and FC (with some extra setup
> maybe also NFS).
> What is your exact testing method? ( commands, file sizes, sofware
> versions, mount options etc.)

Thanks for taking the time to help.

I have used several tools to measure latencies, but it
always boils down to the same numbers. The exact software
components and their releases should not matter much for
a first overview. The important thing is to make sure that
each read request issued by the test inside the VM really
passes through the QEMU layer.

The simplest test I can think of (at least in our case) is
to take a Windows VM and attach a very small NFS-backed
disk of 1 GB to it. Start the VM, install the HD Tune trial
version and run its random access test against the small
disk. Another option is to run some kind of direct-I/O
based read test inside the VM (see the sketch below).
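
In case you want to reproduce the direct I/O variant, this
is roughly what such a test could look like - only a sketch,
assuming a Linux guest with Python 3; file path, block size
and request count are placeholders, not the exact tool I
used:

#!/usr/bin/env python3
# Sketch of a direct-I/O random read latency test.
# Path, sizes and request count are placeholders.
import mmap, os, random, time

PATH = "/mnt/testdisk/testfile"  # hypothetical file on the small test disk
BLOCK = 512                      # request size (use 4096 on 4K-sector devices)
FILE_SIZE = 1 << 30              # 1 GB
REQUESTS = 10000

fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)  # bypass the guest page cache
buf = mmap.mmap(-1, BLOCK)       # page-aligned buffer, required for O_DIRECT

samples = []
for _ in range(REQUESTS):
    offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
    t0 = time.perf_counter()
    os.preadv(fd, [buf], offset)                # one synchronous random read
    samples.append(time.perf_counter() - t0)
os.close(fd)

avg = sum(samples) / len(samples)
print("avg latency %.0f us -> ~%.0f I/Os per second" % (avg * 1e6, 1 / avg))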

During each test I can see the packets flowing between
the NFS server and the hypervisor, so I know that the
requests are not served from a cache inside the VM or QEMU.

After one or two runs the file cache in the RAM of our NFS
server holds all the hot data and the latency drops into
the microsecond range. From that we can derive the penalty
of the virtualization layer.
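
(For reference: with a single outstanding request the
average service time is simply the reciprocal of the I/O
rate, so the numbers above translate to roughly
1/12,000 s = 83us on the bare hypervisor, 1/8,000 s = 125us
inside the ESX VM and 1/2,200 s = 450us inside the oVirt VM
- i.e. roughly 40us of per-request virtualization overhead
on ESX versus roughly 370us on oVirt/QEMU once the NFS
server answers from RAM.)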

Whatever I try to optimize, I only reach about a quarter
of the I/Os per second that ESX achieves for very small
requests (512 bytes or 1K) - and that inside the same
(migrated) VM, on the same NFS topology, with the same
test programs.

The baseline numbers for the hypervisor are the average of
running direct-I/O based test tools against files residing
on the same NFS export, directly on the hypervisor.
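
(The sketch above, pointed at a file on the hypervisor's
NFS mount instead of at a disk inside the VM, is the kind
of test that gives that ~12,000 I/Os per second / 83us
floor - no QEMU in the path.)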

Markus

P.S. I'm not complaining about that performance.
Running an IPoIB environment you get used to wasting
bandwidth and latency. But it is always good to know
where the overhead comes from.

