This is probably more appropriate for the qemu users mailing list, but that list doesn't get
much traffic and most posts go unanswered…
As I’ve mentioned in the past, I’m migrating my environment from ESXi to oVirt AIO.
Under ESXi I was pretty happy with the disk performance, and noticed very little
difference between bare metal and the hypervisor.
Under oVirt/QEMU/KVM, not so much…
Running hdparm on the disk from the HV and from a guest yields the same number, about
180 MB/sec (SATA III disks, 7200 RPM). The problem is that during disk activity, and it
doesn't matter whether it's a Windows 7 guest or a Fedora 20 guest (both using
virtio-scsi), the qemu-system-x86 process starts consuming 100% of the hypervisor's CPU.
The hypervisor is a Core i7 950 with 24 GB of RAM, running two Fedora 20 guests and two
Windows 7 guests, each configured with 4 GB of guaranteed RAM.
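For reference, this is roughly the test I ran in both places (the device path is just an
example; the guest sees its virtio-scsi disk under whatever name it was assigned):

    # on the hypervisor, and again inside a guest
    hdparm -tT /dev/sda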
Load averages can climb above 40 during sustained disk I/O, and performance obviously
suffers greatly.
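If it helps, this is how I've been watching the qemu threads during the I/O spikes (the
process name is as it appears on my host; adjust the pgrep pattern if yours differs):

    # -H shows per-thread CPU usage for all qemu-system-x86 processes
    top -H -p "$(pgrep -d, qemu-system-x86)"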
I have tried all combinations of hosting the guest images on ext4 and Btrfs, and of
using ext4 and Btrfs inside the guests, as well as direct LUN. It makes no difference:
disk I/O sends qemu-system-x86 to high CPU percentages.
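In case the disk attachment details matter, the disk section from virsh dumpxml for one
of the guests looks roughly like this (the guest name and image path are placeholders,
and I may be misremembering the exact attribute values):

    virsh dumpxml <guest-name>

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='threads'/>
      <source file='/path/to/guest.img'/>
      <target dev='sda' bus='scsi'/>
    </disk>

I'm wondering whether the cache= and io= attributes on the <driver> line are relevant
here.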
This can’t be normal, so I’m wondering what I’ve done wrong. Is there some magic setting
I’m missing?