[Users] VirtIO disk latency
Markus Stockhausen
stockhausen at collogia.de
Thu Jan 9 09:16:20 UTC 2014
Hello,
Coming from the "low cost NFS storage" thread, I will open a new one
about a topic that might be interesting for others too.
We see quite a heavy latency penalty using KVM VirtIO disks in comparison
to ESX. Doing one I/O onto disk inside a VM usually adds 370us of overhead in
the virtualisation layer. This has been tested with VirtIO-SCSI and a Windows
guest (2K3). More here (still no answer yet):
http://lists.nongnu.org/archive/html/qemu-discuss/2013-12/msg00028.html
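
If anyone wants to reproduce the effect, something along these lines should
do as a queue-depth-1 latency probe. This is only a rough sketch, not the
tool we used for the numbers below; the file path and sizes are placeholders
you have to adapt to your own mount:

    #!/usr/bin/env python3
    # Rough QD=1 read latency probe (sketch only). Assumes Linux and a
    # pre-existing test file on the NFS mount (PATH is a placeholder).
    # O_DIRECT bypasses the page cache so every read hits the storage.
    import mmap
    import os
    import time

    PATH = "/mnt/nfs/testfile"   # placeholder, must be >= COUNT * BS bytes
    BS = 1024                    # 1K reads, matching the figures below
    COUNT = 10000

    fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BS)      # anonymous mmap is page aligned, as O_DIRECT requires

    start = time.perf_counter()
    for _ in range(COUNT):
        os.readv(fd, [buf])      # one synchronous sequential 1K read per loop
    elapsed = time.perf_counter() - start
    os.close(fd)

    iops = COUNT / elapsed
    print("%.0f IOPS -> %.0f us per I/O" % (iops, 1e6 / iops))

Run it once inside the hypervisor against the NFS mount and once inside the
guest against a VirtIO disk on the same datastore to see the delta.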
A comparison for small sequential 1K I/Os on an NFS datastore in our setup gives:
- access NFS inside the hypervisor - 12,000 I/Os per second - or 83us latency
- access DISK inside an ESX VM that resides on NFS - 8,000 I/Os per second - or 125us latency
- access DISK inside an oVirt VM that resides on NFS - 2,200 I/Os per second - or 450us latency
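(For clarity, the latency column is just the inverse of the IOPS, assuming a
single outstanding I/O:

    for iops in (12000, 8000, 2200):
        print("%5d IOPS -> %.0f us per I/O" % (iops, 1e6 / iops))
    # 12000 -> 83 us, 8000 -> 125 us, 2200 -> 455 us

so the virtualisation layer alone accounts for roughly 370us per I/O.)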
Even the official document at http://www.linux-kvm.org/page/Virtio/Block/Latency
suggests that the various mechanisms (iothread/vcpu) have an overhead
of more than 200us.
Has anyone experienced something similar? If these latencies are normal, it would
make no sense to think about SSDs inside a central storage (be it iSCSI or NFS
or whatever).
Markus