Hey folks,
A Gluster-related question: I have SSDs in a RAID that can do 2 GB/sec
writes and reads (actually more, but meh) in a 3-way HCI cluster
connected over 10 GBit, and things are pretty slow inside Gluster.
I have these settings:
Options Reconfigured:
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.shd-max-threads: 8
features.shard: on
features.shard-block-size: 64MB
server.event-threads: 8
user.cifs: off
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.eager-lock: enable
performance.low-prio-threads: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
storage.owner-gid: 36
storage.owner-uid: 36
cluster.choose-local: true
client.event-threads: 16
performance.strict-o-direct: on
network.remote-dio: enable
performance.client-io-threads: on
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
cluster.readdir-optimize: on
cluster.metadata-self-heal: on
cluster.data-self-heal: on
cluster.entry-self-heal: on
cluster.data-self-heal-algorithm: full
features.uss: enable
features.show-snapshot-directory: on
features.barrier: disable
auto-delete: enable
snap-activate-on-create: enable
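(In case it matters, this is just the normal CLI route for setting and
reading back a single option; the volume name "data" is only a
placeholder here:

gluster volume set data performance.strict-o-direct on
gluster volume get data performance.strict-o-direct
)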
Writing inside /gluster_bricks yields those 2 GB/sec writes; reading is
the same.
Inside the /rhev/data-center/mnt/glusterSD/ dir, reads go down to
366 MB/sec while writes plummet to 200 MB/sec.
Summed up: writing into the SSD RAID in the LVM/XFS Gluster brick
directory is fast; writing into the mounted Gluster dir is horribly slow.
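(If anyone wants to reproduce the gap, a plain sequential dd with direct
I/O is enough; paths and sizes below are just examples:

# write test on the brick filesystem (fast):
dd if=/dev/zero of=/gluster_bricks/ddtest bs=1M count=10240 oflag=direct
# same on the FUSE mount (slow); for reads, swap if/of and use iflag=direct:
dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/ddtest bs=1M count=10240 oflag=direct
)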
The above can be seen and repeated on all 3 servers. The network can do
the full 10 GBit (tested with, among others, rsync and iperf3).
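(For the network check, roughly: iperf3 -s on one node, then from another
node something like

iperf3 -c <other-node> -P 4 -t 30

which gets the link to full 10 GBit.)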
Anyone with some idea on what's missing / going on here?
Thanks folks,
as always stay safe and healthy!
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss