On Oct 26, 2017 6:16 PM, "Bryan Sockel" <Bryan.Sockel@altn.com> wrote:
My server load is pretty light; currently no more than 15-20 VMs are running on my oVirt configuration.  Attached are performance testing results from the local host and from a Windows box, along with my gluster volume configuration.  There seems to be a significant performance loss between server performance and guest performance.  I am not sure if it is related to my limited bandwidth, mismatched server hardware, or something else.

I would begin by comparing the host performance with a Linux guest, then move on to Windows. Also, please ensure the rng driver is installed (I assume you already use virtio or virtio-scsi). 
Y. 
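One way to make that host-vs-guest comparison concrete is to run the same fio job on the host (against the brick filesystem) and inside a Linux guest, then compare the reported bandwidth. A minimal sketch; the filename, block size, and file size below are placeholders, not values from the thread:

```shell
#!/bin/sh
# Sequential-write sketch with fio. Run it once on the host against the
# brick mount point and once inside a guest, then compare the throughput
# fio reports. Parameters here are illustrative only.
command -v fio >/dev/null 2>&1 || { echo "fio not installed; skipping"; exit 0; }

fio --name=seqwrite --filename=fio-testfile \
    --rw=write --bs=1M --size=64M \
    --ioengine=psync --numjobs=1 --group_reporting

# Clean up the test file afterwards.
rm -f fio-testfile
```

Running the identical job in both places isolates the virtualization/gluster overhead from raw disk performance.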

 
 
When not performing any major disk-intensive activities, the system typically runs at less than 10 MiB/s.  Network activity is about 100 Mbps.
 
Does anyone know if there is any sort of best-practices document when it comes to hardware setup for gluster?  Specifically related to hardware RAID vs. JBOD, stripe size, brick configuration, etc.
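One concrete, documented piece of this for VM workloads is the "virt" option group, which applies the gluster settings oVirt expects on volumes holding VM images (sharding, eager locking, etc.). A sketch of applying and reviewing it; the volume name "data" is a placeholder:

```shell
#!/bin/sh
# Apply the "virt" option group to a gluster volume used for VM images
# and review the resulting settings. "data" is a placeholder volume name.
command -v gluster >/dev/null 2>&1 || { echo "gluster CLI not installed; skipping"; exit 0; }

gluster volume set data group virt
gluster volume info data
```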
 
Thanks
Bryan 
 
 
-----Original Message-----
From: Juan Pablo <pablo.localhost@gmail.com>
To: Bryan Sockel <Bryan.Sockel@altn.com>
Cc: "users@ovirt.org" <users@ovirt.org>
Date: Thu, 26 Oct 2017 09:59:16 -0300
Subject: Re: [ovirt-users] Storage Performance
 
Hi, can you check IOPS and state the number of VMs?  Run: iostat -x 1 for a while =)
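To make the iostat output easy to share on the list, a bounded sample can be captured to a file; a small sketch (sample count and filename are arbitrary):

```shell
#!/bin/sh
# Capture a few one-second extended iostat samples while a file copy runs
# in the guest. r/s + w/s give IOPS; %util shows device saturation, and
# await/avgqu-sz are the first columns to check for latency problems.
command -v iostat >/dev/null 2>&1 || { echo "iostat (sysstat) not installed; skipping"; exit 0; }

iostat -x 1 3 > iostat-sample.txt
head -n 20 iostat-sample.txt
```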
 
Isn't RAID discouraged?  AFAIK gluster prefers JBOD; am I wrong?

 
regards,
JP
 
2017-10-25 12:05 GMT-03:00 Bryan Sockel <Bryan.Sockel@altn.com>:
I have a question regarding storage performance.  I have a gluster replica 3 volume that we are testing for performance.  In my current configuration, one server has 16 x 1.2 TB (10K RPM, 2.5 inch) drives in RAID 10 with a 256 KiB stripe.  My second server has 4 x 6 TB (3.5 inch) drives in RAID 10 with a 256 KiB stripe.  Each server has an 802.3ad bond (4 x 1 Gb) for its network links, and each has write-back caching enabled on the RAID controller.
 
I am seeing a lot of network usage (a solid 3 Gbps) when I perform file copies on the VM attached to that gluster volume, but I see spikes in the disk I/O when watching the dashboard through the Cockpit interface.  The spikes are up to 1.5 Gbps, but I would say the average throughput is maybe 256 Mbps.
 
Is this to be expected, or should disk I/O show as solid activity in the graphs?  Also, is it better to use a 256 KiB stripe or a 512 KiB stripe in the hardware RAID configuration?
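On the stripe question, one common rule of thumb is to think in terms of the full-stripe width: stripe size times the number of data drives (in RAID 10, half the drives carry data, half are mirrors), since aligning the filesystem to that width avoids read-modify-write cycles. A quick back-of-the-envelope using the drive counts from this thread:

```shell
#!/bin/sh
# Full-stripe width for RAID 10 = stripe size x data drives
# (data drives = total drives / 2, since the other half are mirrors).
stripe_kib=256
for drives in 16 4; do
    echo "${drives} drives: $(( stripe_kib * drives / 2 )) KiB full stripe"
done
```

So the two servers' full-stripe widths differ (2048 KiB vs 512 KiB) even with the same 256 KiB stripe setting, which is worth keeping in mind when comparing their performance.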
 
Eventually I plan on matching up the hardware for better performance.
 
 
Thanks
 
Bryan

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
 
