Hi all:
I've had I/O performance problems pretty much since I started using
oVirt. I've applied several upgrades over time, but strangely, none of
them have alleviated the problem. VM disk I/O is still so slow that
running VMs is often painful; it affects nearly all my VMs and makes me
leery of starting any more. I'm currently running 12 VMs plus the hosted
engine on the stack.
My configuration started out with 1Gbps networking and hyperconverged
Gluster running on a single SSD in each node. It worked, but I/O was
painfully slow. I also started running out of space, so I added an SSHD
to each node, created another Gluster volume, and moved VMs over to it,
running it on a dedicated 1Gbps network. I had recurring disk failures
(disks only seemed to last 3-6 months; I warrantied all three at least
once, and some twice, before giving up). I suspect the Dell PERC 6/i was
partly to blame: the RAID card refused to see or acknowledge a disk, but
plugging the same disk into a normal PC showed no signs of problems. In
any case, performance on that storage was notably bad, even though the
1GbE interface was rarely taxed.
I nonetheless put in 10Gbps Ethernet and moved all the storage onto it,
as several people here said that 1Gbps just wasn't fast enough. Some
aspects improved a bit, but disk I/O is still slow. I was also still
having problems with the SSHD data Gluster volume eating disks, so I
bought a dedicated NAS: a SuperMicro 12-disk FreeNAS NFS storage system
on 10Gbps Ethernet, and set that up. I found that it was actually FASTER
than the SSD-based Gluster volume, but still slow. Lately it's been
getting slower, too, and I don't know why. The FreeNAS server reports
network loads around 4MB/s on its 10GbE interface, so it's not network
constrained. At 4MB/s, I'd sure hope the 12-spindle SAS backend isn't
constrained either (and disk I/O operations run directly on the NAS
itself complete much faster).
So, running a test on my NAS against an ISO file I haven't accessed in
months:
# dd if=en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso \
     of=/dev/null bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 2.459501 secs (213168465 bytes/sec)
Running it on one of my hosts:
root@unifi:/home/kusznir# time dd if=/dev/sda of=/dev/null bs=1024k \
    count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB, 500 MiB) copied, 7.21337 s, 72.7 MB/s
(I don't know if this is a true apples-to-apples comparison, since I
don't have a large file inside this VM's image.) Even this is faster
than I often see.
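One caveat worth noting about the dd numbers above: a large sequential read may be served largely from cache (the ZFS ARC on the FreeNAS side, or the Linux page cache on a host/VM), so repeated runs can overstate what the storage actually delivers. A sketch of a fairer test on a Linux host or VM, assuming GNU dd (the FreeBSD dd on FreeNAS doesn't support `iflag=direct`), with example paths you'd adjust to the mount in question:

```shell
# Sketch only: compare cached vs. uncached sequential reads with GNU dd.
# TESTFILE is an example path; point it at the storage you want to test.
TESTFILE=/tmp/ddtest.bin

# Write a 256 MiB test file and flush it to disk.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync 2>/dev/null

# This read may be served from the page cache, since we just wrote it:
dd if="$TESTFILE" of=/dev/null bs=1M

# A direct read bypasses the page cache, so it reflects real storage (or
# network-storage) throughput. O_DIRECT isn't supported on tmpfs, so
# fall back gracefully if the filesystem refuses it.
dd if="$TESTFILE" of=/dev/null bs=1M iflag=direct 2>/dev/null \
    || echo "direct I/O not supported on this filesystem"

rm -f "$TESTFILE"
```

The gap between the cached and direct numbers gives a rough idea of how much the cache was flattering the earlier results.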
I have a VoIP phone server running as a VM. Voicemail and other
recordings usually fail due to I/O issues opening and writing the files.
Often the first 4 or so seconds of a recording are missed; sometimes the
entire thing just fails. I didn't use to have this problem, but it's
definitely been getting worse. I finally bit the bullet and ordered a
physical server dedicated to my VoIP system, but I still want to figure
out why I'm having all these I/O problems. I've read on this list about
people running 30+ VMs; I feel my I/O can't take any more VMs with any
semblance of reliability. We have a QuickBooks server (Windows) on here
too, and its performance is abysmal; my CPA is charging me extra because
of all the staff time lost waiting for the system to respond and
generate reports.
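The voicemail symptom (first seconds of a recording lost) points at slow small synchronous writes rather than sequential throughput, which the dd tests don't measure at all. A sketch of a way to time that pattern from inside the VM, assuming GNU dd and an example path you'd swap for the actual storage-backed directory:

```shell
# Sketch only: time 100 small synchronous 4 KiB appends, roughly the
# pattern a voicemail recorder follows. TESTFILE is an example path;
# run this against the storage backing the VM's disk.
TESTFILE=/tmp/sync_write_test.bin
rm -f "$TESTFILE"

START=$(date +%s%N)
i=1
while [ "$i" -le 100 ]; do
    # oflag=sync forces each write to stable storage before returning,
    # which is where NFS/Gluster latency shows up.
    dd if=/dev/zero of="$TESTFILE" bs=4k count=1 oflag=sync \
       conv=notrunc seek="$i" 2>/dev/null
    i=$((i + 1))
done
END=$(date +%s%N)

echo "100 sync writes took $(( (END - START) / 1000000 )) ms total"
rm -f "$TESTFILE"
```

If each sync write takes tens of milliseconds on the VM but well under a millisecond on the NAS locally, the problem is in the path between them (NFS mount options, virtual disk caching mode, etc.) rather than the spindles.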
I'm at my wits' end. I started with Gluster on SSD over a 1Gbps network,
migrated to a 10Gbps network, and now to a dedicated high-performance
NAS over NFS, and I still have performance issues. I don't know how to
troubleshoot the issue any further, but I've never had these kinds of
issues when playing with other VM technologies. I'd like to get to the
point where I can resell virtual servers to customers, but I can't do so
at my current performance levels.
I'd greatly appreciate help troubleshooting this further.
--Jim