[Users] Horrid performance during disk I/O
Andrew Cathrow
acathrow at redhat.com
Tue Jan 14 17:04:25 UTC 2014
----- Original Message -----
> From: "Blaster" <blaster at 556nato.com>
> To: users at ovirt.org
> Sent: Monday, January 13, 2014 12:22:37 PM
> Subject: [Users] Horrid performance during disk I/O
>
>
> This probably more appropriate for the qemu users mailing list, but
> that list doesn’t get much traffic and most posts go unanswered…
>
> As I’ve mentioned in the past, I’m migrating my environment from ESXi
> to oVirt AIO.
>
> Under ESXi I was pretty happy with the disk performance, and noticed
> very little difference from bare metal to HV.
>
> Under oVirt/QEMU/KVM, not so much….
>
> Running hdparm on the disk from the HV and from the guest yields the
> same number, about 180 MB/sec (SATA III disks, 7200 RPM). The problem
> is that during disk activity, whether from the Windows 7 guests or the
> Fedora 20 guests (both using virtio-scsi), the qemu-system-x86 process
> starts consuming 100% of the hypervisor CPU. The hypervisor is a Core
> i7 950 with 24 GB of RAM. There are two Fedora 20 guests and two
> Windows 7 guests, each configured with 4 GB of guaranteed RAM.
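For an apples-to-apples number it can help to time the exact same
sequential read on the hypervisor and inside a guest. A minimal sketch
in Python (the /dev/sdb default below is just a placeholder, point it
at whatever disk or large file you are testing; without O_DIRECT the
page cache can inflate results, so read more data than you have RAM or
drop caches first with "echo 3 > /proc/sys/vm/drop_caches"):

import os, sys, time

# Time a plain sequential read and report MB/s. Run the same command on
# the hypervisor and inside a guest against the same target and compare.
def read_throughput(path, total_bytes=2 * 1024**3, block=1024 * 1024):
    fd = os.open(path, os.O_RDONLY)
    done = 0
    start = time.time()
    try:
        while done < total_bytes:
            chunk = os.read(fd, block)
            if not chunk:
                break
            done += len(chunk)
    finally:
        os.close(fd)
    return done / (time.time() - start) / 1024**2

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb"  # placeholder
    print("%.1f MB/s" % read_throughput(target))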
>
Did you compare virtio-block to virtio-scsi? The former will likely outperform the latter.
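A quick way to confirm which bus each guest is actually using is to
pull the disk elements out of the domain XML. A small sketch with the
libvirt Python bindings (assumes the guests run under qemu:///system;
bus=scsi means virtio-scsi, bus=virtio means virtio-blk):

import libvirt
import xml.etree.ElementTree as ET

# Print each domain's disks with target bus and driver cache/io settings.
conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    root = ET.fromstring(dom.XMLDesc(0))
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        driver = disk.find("driver")
        print(dom.name(),
              target.get("dev"),
              "bus=" + (target.get("bus") or "?"),
              "cache=" + ((driver.get("cache") if driver is not None else None) or "default"),
              "io=" + ((driver.get("io") if driver is not None else None) or "default"))
conn.close()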
> Load averages can go up over 40 during sustained disk IO.
> Performance obviously suffers greatly.
>
> I have tried all combinations of hosting the guest images on ext4 or
> Btrfs and using ext4 or Btrfs inside the guests, as well as direct LUN.
> Doesn’t make any difference. Disk IO sends qemu-system-x86 to high
> CPU percentages.
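Filesystem choice aside, the cache and aio modes qemu ends up with
often matter more for host CPU than ext4 vs Btrfs: cache=writethrough
or writeback copies everything through the host page cache, while
cache=none with aio=native usually keeps qemu much cooler (and on
Btrfs, copy-on-write on image files adds extra work unless it is
disabled, e.g. chattr +C on the images directory). A rough sketch to
see what the running guests actually got, by reading the qemu command
lines from /proc (the process matching is just a heuristic):

import glob, re

# Find qemu processes and print the cache= / aio= options of each -drive.
for cmdline in glob.glob("/proc/[0-9]*/cmdline"):
    with open(cmdline, "rb") as f:
        args = f.read().split(b"\0")
    if not args or b"qemu" not in args[0]:
        continue
    pid = cmdline.split("/")[2]
    for i, arg in enumerate(args):
        if arg == b"-drive" and i + 1 < len(args):
            opts = args[i + 1].decode()
            cache = re.search(r"cache=([^,]+)", opts)
            aio = re.search(r"aio=([^,]+)", opts)
            print(pid,
                  "cache=" + (cache.group(1) if cache else "default"),
                  "aio=" + (aio.group(1) if aio else "default"))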
>
> This can’t be normal, so I’m wondering what I’ve done wrong. Is
> there some magic setting I’m missing?
>
>