So I've finally completed my first HCI build, using the configuration below:

3x Dell PowerEdge R720, each with:
2x E5-2690 (8-core, 2.9 GHz)
256 GB RAM
2x 250 GB SSD in RAID 1 (boot/OS)
2x 2 TB SSD in JBOD passthrough (used for Gluster bricks)
1 GbE NIC for management, 10 GbE NIC for Gluster

Using Replica 3 with no arbiter. 

I installed the latest version of oVirt available at the time (4.2.5) and created the recommended volumes, with an additional data volume on the second SSD. I'm not using VDO.
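For what it's worth, I verified the layout with gluster volume info (assuming a volume named "data" here), which for a three-host replica 3 setup should report Type: Replicate and Number of Bricks: 1 x 3 = 3:

gluster volume info data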

The first thing I did was set up a GlusterFS network on the 10 GbE NICs and configure it to be used for GlusterFS and migration traffic.

I've set up a single test VM using CentOS 7 minimal on the default "x-large instance" profile.

Within this VM, if I do a very basic write test, something like:

dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync

I'm seeing quite slow speeds: only about 8 MB/s.
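In case my methodology is the problem: conv=fdatasync only forces a flush at the end, so I assume the page cache could still be skewing things. A direct-I/O variant (assuming the guest filesystem supports O_DIRECT), plus a small random-write run with fio if it's available, would look something like:

dd bs=1M count=256 if=/dev/zero of=test oflag=direct
fio --name=writetest --rw=randwrite --bs=4k --size=256m --ioengine=libaio --direct=1 --filename=fio-test

I haven't drawn any conclusions from these; they're just the alternative tests I know of.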

If I do the same from one of the hosts' Gluster mounts, i.e.

host1: /rhev/data-center/mnt/glusterSD/HOST:data 

I get about 30 MB/s (which still seems fairly low?).
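To rule out the network itself, my plan is to check raw throughput between two of the hosts over the Gluster network with iperf3 (the hostname here is a placeholder for whatever resolves on the 10 GbE network):

iperf3 -s                  # on host1
iperf3 -c host1-gluster    # on host2

I'd expect to see close to line rate there if the 10 GbE path is actually being used.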

Am I testing incorrectly here? Is there anything I should be tuning on the Gluster volumes to increase performance with SSDs? Where can I find out where the bottleneck is, or is this the expected performance of Gluster?
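On the bottleneck question, the only built-in tool I'm aware of is Gluster's profiling, e.g. (again assuming the volume is named "data"):

gluster volume profile data start
# run the dd test here
gluster volume profile data info
gluster volume profile data stop

If anyone can point out what to look for in that output, or which volume options are worth trying for an all-SSD replica 3 setup, I'd appreciate it.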