OK guys, my setup is like this:
2 x servers, each with 5 x 4TB 7200 RPM drives in raidz1 and a 10G storage network (MTU 9000) - these hold my gluster_bricks folders
1 x SFF workstation with 2 x 50GB SSDs in a ZFS mirror - my gluster_bricks folder for the arbiter
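For reference, this is the kind of quick check that confirms the pool layout and that jumbo frames really pass end to end (the hostname is just an example taken from the volume info below; -M do forbids fragmentation and 8972 is 9000 minus the 28 bytes of IP/ICMP headers):

    # show vdev layout and health of the local pool(s)
    zpool status -v
    # confirm MTU 9000 passes unfragmented across the storage network
    ping -M do -s 8972 -c 4 ovirtn2.5erverssan.lan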
My gluster vol info looks like this:
Volume Name: vmstore
Type: Replicate
Volume ID: 7deac39b-3109-4229-b99f-afa50fc8d5a1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirtn1.5erverssan.lan:/gluster_bricks/vmstore/vmstore
Brick2: ovirtn2.5erverssan.lan:/gluster_bricks/vmstore/vmstore
Brick3: ovirtn3.5erverssan.lan:/gluster_bricks/vmstore/vmstore (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: off
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on
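(As far as I know these are the usual oVirt hyperconverged defaults; individual options can be double-checked per volume with gluster volume get, e.g.:)

    gluster volume get vmstore performance.strict-o-direct
    gluster volume get vmstore network.remote-dio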
And my test results look like this:
starting on engine
/tmp
50M
dd: error writing './junk': No space left on device
40+0 records in
39+0 records out
2044723200 bytes (2.0 GB, 1.9 GiB) copied, 22.1341 s, 92.4 MB/s
starting
/tmp
10M
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 11.4612 s, 91.5 MB/s
starting
/tmp
1M
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.602421 s, 174 MB/s
starting on node1
/gluster_bricks
50M
100+0 records in
100+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 40.8802 s, 128 MB/s
starting
/gluster_bricks
10M
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 7.49434 s, 140 MB/s
starting
/gluster_bricks
1M
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.164098 s, 639 MB/s
starting on node2
/gluster_bricks
50M
100+0 records in
100+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 22.0764 s, 237 MB/s
starting
/gluster_bricks
10M
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 4.32239 s, 243 MB/s
starting
/gluster_bricks
1M
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0584058 s, 1.8 GB/s
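(Judging by the byte counts, each run was something along the lines of the sketch below, with bs varied between 50M, 10M and 1M and 100 blocks per run. The output doesn't show whether a direct/sync flag was used, which makes a big difference when comparing /tmp against the raw bricks; conv=fdatasync is my addition here so the time includes the final flush:)

    cd /gluster_bricks          # or /tmp on the engine VM
    dd if=/dev/zero of=./junk bs=50M count=100 conv=fdatasync
    rm -f ./junk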
I don't know why my ZFS arrays perform differently; it's the same drives with the same config.
Is this performance normal or bad? I think it's pretty bad, hmm... Any tips or tricks for this?
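If it helps, I can also post a comparison of the two pools. The sketch below is what I'd gather (standard ZFS commands; <pool> is a placeholder for whatever the brick dataset is actually called here). An ashift, recordsize, compression or sync mismatch between node1 and node2 would at least explain the gap, and if compression is on, dd from /dev/zero is mostly measuring CPU anyway since the zeros compress away:

    # run on node1 and node2, then diff the results
    zpool get ashift
    zpool status -v
    zfs get recordsize,compression,sync,atime,xattr <pool>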