[ovirt-users] Storage Performance

Bryan Sockel Bryan.Sockel at altn.com
Thu Oct 26 15:15:49 UTC 2017


My server load is pretty light.  Currently there are no more than 15-20 VMs 
running on my oVirt config.  Attached are performance testing results from the 
local host and from a Windows box.  I have also attached my Gluster volume 
configuration.   There seems to be a significant performance loss between 
server performance and guest performance.  I am not sure whether it is related 
to my limited bandwidth, mismatched server hardware, or something else.


When not performing any major disk-intensive activities, the system typically 
runs at less than 10 MiB/s of disk throughput, with network activity around 
100 Mbps.

Does anyone know if there is any sort of best-practices document for hardware 
setup for Gluster?  Specifically related to hardware RAID vs. JBOD, stripe 
size, brick configuration, etc.

Thanks
Bryan 


-----Original Message-----
From: Juan Pablo <pablo.localhost at gmail.com>
To: Bryan Sockel <Bryan.Sockel at altn.com>
Cc: "users at ovirt.org" <users at ovirt.org>
Date: Thu, 26 Oct 2017 09:59:16 -0300
Subject: Re: [ovirt-users] Storage Performance

Hi, can you check IOPS and state the number of VMs?  Run: iostat -x 1 for a 
while =)
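JP's iostat suggestion above can be sketched as follows.  The sample device 
line is hypothetical (made-up numbers, not from this thread), but the idea 
holds for real output: in classic `iostat -x` output the last column is %util, 
and a device pinned near 100% is saturated.

```shell
# Hypothetical one-line sample of `iostat -x 1` device output;
# in practice run `iostat -x 1` on each gluster brick host.
sample="sda 0.00 12.00 85.00 140.00 4.10 35.60 361.2 1.9 8.4 3.2 11.6 4.2 95.0"

# Columns to watch: r/s and w/s (IOPS), await (latency in ms), %util.
# Extract the last field (%util in classic iostat -x output) to flag
# a saturated disk.
util=$(echo "$sample" | awk '{print $NF}')
echo "device utilization: ${util}%"
```

If %util sits near 100 while throughput is low, the bricks are IOPS-bound 
rather than bandwidth-bound.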

Isn't RAID discouraged?  AFAIK Gluster likes JBOD, am I wrong?


regards,
JP

2017-10-25 12:05 GMT-03:00 Bryan Sockel <Bryan.Sockel at altn.com>:
I have a question in regards to storage performance.  I have a Gluster replica 
3 volume that we are testing for performance.  In my current configuration, 
one server has 16 x 1.2 TB (10K RPM, 2.5-inch) drives configured in RAID 10 
with a 256 KB stripe.  My second server is configured with 4 x 6 TB (3.5-inch) 
drives in RAID 10 with a 256 KB stripe.  Each server has an 802.3ad bond 
(4 x 1 Gb) for its network links, and each RAID controller is configured with 
write-back caching.

I am seeing a lot of network usage (a solid 3 Gbps) when I perform file copies 
on the VM attached to that Gluster volume, but I only see spikes in the disk 
I/O when watching the dashboard through the Cockpit interface.  The spikes are 
up to 1.5 Gbps, but I would say the average throughput is maybe 256 Mbps.
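One possible explanation for the 3 Gbps figure, assuming a pure replica-3 
volume with client-side replication (my assumption; the thread does not 
confirm the volume layout or whether an arbiter is used): the Gluster client 
sends every write to all three bricks, so wire traffic is roughly three times 
the guest's write rate.

```shell
# Back-of-envelope sketch: replica-3 Gluster with client-side replication
# amplifies network traffic 3x relative to the guest write rate.
# guest_write_gbps is a hypothetical figure, not a measurement from the thread.
replica_count=3
guest_write_gbps=1
network_gbps=$((replica_count * guest_write_gbps))
echo "${network_gbps} Gbps on the wire"
```

By that logic, a guest writing at roughly 1 Gbps would produce about 3 Gbps of 
network traffic, which matches the observation above.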

Is this to be expected, or should the disk I/O graphs show solid activity?  Is 
it better to use a 256 KB stripe or a 512 KB stripe in the hardware RAID 
configuration?
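For an apples-to-apples host-vs-guest comparison, a synthetic benchmark with 
fixed parameters may help.  fio is my suggestion here, not a tool used in this 
thread (the attachments used CrystalDiskMark and IOzone), and every parameter 
below is illustrative only.

```shell
# Hedged sketch: a fio invocation for 4k random writes with the page cache
# bypassed (direct=1), run identically on a brick host and inside a guest.
# The flags are standard fio options, but the values are illustrative.
cmd='fio --name=randwrite-4k --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --size=1g --runtime=30 --time_based'
echo "$cmd"
```

Comparing the IOPS and latency fio reports in both places would show how much 
of the loss comes from the Gluster/virtualization layers rather than the disks.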

Eventually I plan on matching up the hardware for better performance.


Thanks
 
Bryan

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
-------------- attachments --------------
CrystalDiskmark-Guest.JPG (image/jpeg, 57333 bytes):
<http://lists.ovirt.org/pipermail/users/attachments/20171026/abe7fa04/attachment.jpe>
CrystalDiskMark-Guest.txt:
<http://lists.ovirt.org/pipermail/users/attachments/20171026/abe7fa04/attachment.txt>
Gluster Volume Config.txt:
<http://lists.ovirt.org/pipermail/users/attachments/20171026/abe7fa04/attachment-0001.txt>
Iozone-host.jpg (image/jpeg, 57854 bytes):
<http://lists.ovirt.org/pipermail/users/attachments/20171026/abe7fa04/attachment.jpg>
iozone-host.txt:
<http://lists.ovirt.org/pipermail/users/attachments/20171026/abe7fa04/attachment-0002.txt>

