I also wonder: if I had set up the storage as Gluster instead of
"straight" NFS (and thus utilised Gluster's own NFS services), would
this have been an issue? Does Gluster have built-in options that would
avoid this sort of performance problem?
-Alan
On 30/07/2015 10:57 PM, Alan Murrell wrote:
OK, so an update... it looks like the issue was indeed my NFS
settings.
I ended up doing a self-hosted engine install again. Here is the export
entry I had been using for my (NFS) data domain (based on what I had
seen in the oVirt documentation):
/storage1/data *(rw,all_squash,anonuid=36,anongid=36)
However, I had come across some articles on NFS tuning and performance
(nothing from oVirt, though) indicating that current versions of NFS
export with "sync" by default, meaning the server commits every write
to disk before acknowledging it to the client. Indeed, my new test VM
was getting the same disk write performance as before (about 10-15 MB/s).
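(Side note: "sync" is applied even though it never appears in
/etc/exports. If you want to confirm what your own server is doing,
running the following on the NFS server should list the effective
options for each export; the exact list varies with the nfs-utils
version, but sync or async should show up in it:

    # show the expanded, in-effect export options
    exportfs -v
    cat /var/lib/nfs/etab
)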
In my new install, I added my NFS data store as I had before, but I
also added a second data store like this:
/storage1/data *(rw,all_squash,async,anonuid=36,anongid=36)
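(For reference, on the NFS server side this amounts to adding a second
line to /etc/exports and re-exporting. A minimal sketch, using
/storage1/data2 as a placeholder path since the second export needs its
own directory:

    # /etc/exports
    /storage1/data   *(rw,all_squash,anonuid=36,anongid=36)
    /storage1/data2  *(rw,all_squash,async,anonuid=36,anongid=36)

    # apply the changes without restarting the NFS service
    exportfs -ra
)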
I then migrated my VM's vHDD to this second data store. Once it was
migrated, I rebooted the VM and ran the HDD test again. The results are
*much* better: about 130 MB/s sequential write (averaged over half a
dozen or so runs) and almost 2 GB/s sequential read. If it means
anything to anyone, random QD32 speeds are about 30 MB/s for write and
40 MB/s for read.
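(If anyone wants to reproduce this, any disk benchmark inside the guest
should show the same difference. With fio, for example, something along
these lines would cover the sequential and QD32 cases; the file name,
size, and the 4k block size for the random test are just assumptions on
my part:

    # sequential write, 1M blocks, direct I/O
    fio --name=seqwrite --filename=/root/fio.test --size=2G \
        --rw=write --bs=1M --direct=1 --ioengine=libaio --iodepth=1

    # random write, 4k blocks, queue depth 32
    fio --name=randwrite-qd32 --filename=/root/fio.test --size=2G \
        --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=32
)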
Hopefully this can help someone else out there. Would it be appropriate
to add this to the "Troubleshooting NFS" documentation page? As long as
people are aware of the possible consequences of the 'async' option
(possible data loss if the server shuts down suddenly), it seems to be
a viable solution.
@Donny: Thanks for pointing me in the right direction. I was actually
starting to get a bit frustrated as it felt like I was talking to myself
there... :-(
-Alan