I would love to see something similar to your performance numbers, WK.
Here are my gluster volume options and info:
[root@ovirtn1 ~]# gluster v info vmstore
Volume Name: vmstore
Type: Replicate
Volume ID: stuff
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirtn1.5ervers.lan:/gluster_bricks/vmstore/vmstore
Brick2: ovirtn2.5ervers.lan:/gluster_bricks/vmstore/vmstore
Brick3: ovirtn3.5ervers.lan:/gluster_bricks/vmstore/vmstore (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on
Does it look like sharding is on, Strahil Nikolov?
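For what it's worth, I believe a single option can also be checked directly with the standard gluster CLI, something like:

[root@ovirtn1 ~]# gluster volume get vmstore features.shard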
Running "gluster volume set vmstore group virt" had no effect.
I don't know why I ended up using the dsync flag.
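The dd test I ran was roughly along these lines (the target path and sizes here are only placeholders, not my exact command); oflag=dsync forces a sync after every block, so it measures synchronous write latency rather than raw throughput:

[root@ovirtn1 ~]# dd if=/dev/zero of=/path/to/gluster/mount/testfile bs=1M count=1024 oflag=dsync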
For a real-world test, I ran CrystalDiskMark on a Windows VM; these are the results:
https://gofile.io/d/7nOeEL
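If it's easier to compare from the Linux side, something like the following fio run should roughly approximate CrystalDiskMark's sequential 1M test (the job parameters and file path here are just my assumptions, adjust as needed):

fio --name=seqwrite --rw=write --bs=1M --iodepth=8 --numjobs=1 --size=1g --ioengine=libaio --direct=1 --filename=/path/to/gluster/mount/fio.test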