Hey,
thanks for writing. Sorry about the delay.
On 25/03/2020 00:25, Nir Soffer wrote:
> These settings mean:
>> performance.strict-o-direct: on
>> network.remote-dio: enable
> That you are using direct I/O both on the client and server side.
I changed them to off, to no avail; the results are the same.
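For reference, I toggled them with the usual volume-set commands, roughly
like this ("data" is just a placeholder for the actual volume name):

  # the two options mentioned above, set on the affected volume
  gluster volume set data performance.strict-o-direct off
  gluster volume set data network.remote-dio disable

and set them back to on/enable afterwards.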
>> Writing inside the /gluster_bricks yields those 2GB/sec writes, reading
>> the same.
> How did you test this?
I ran

  dd if=/dev/zero of=testfile oflag=direct bs=1M status=progress

(with varying block sizes) on
- the mounted gluster brick (/gluster_bricks...)
- the mounted gluster volume (/rhev.../mount/...)
- inside a running VM

I also switched it around and read an image file from the gluster volume
at the same speeds.
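The read direction was the same idea, only with the input flag for direct
I/O; roughly (the image path is just an example):

  # read an existing image with direct I/O, discarding the data
  dd if=/path/to/image of=/dev/null iflag=direct bs=1M status=progress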
> Did you test reading from the storage on the server side using direct
> I/O? If not, you test accessing the server buffer cache, which is pretty
> fast.
Which is where oflag comes in. I can confirm that skipping it results in
really, really fast I/O until the buffer is full. With oflag=direct I still
see ~2GB/sec on the RAID and ~200MB/sec on the gluster volume.
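To keep the server-side page cache out of the picture entirely I can also
drop the caches between runs, e.g.:

  # flush dirty pages, then drop page cache, dentries and inodes (as root)
  sync; echo 3 > /proc/sys/vm/drop_caches

if that is what you are after.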
>> Reading inside the /rhev/data-center/mnt/glusterSD/ dir reads go down
>> to 366mb/sec while writes plummet to 200mb/sec.
> This uses direct I/O.
Even with direct I/O turned on (it is currently off, yielding the same
results either way) this is way too slow for direct I/O.
> Please share the commands/configuration files used to perform the tests.
> Adding storage folks that can help with analyzing this.
I am happy to oblige and supply any required logs or profiling
information if you'd be so kind to tell me which ones, precisely.
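If gluster-level profiling is what you have in mind, I assume it would be
the standard profile run (the volume name "data" is again a placeholder):

  gluster volume profile data start
  # ... repeat the dd tests while profiling is on ...
  gluster volume profile data info
  gluster volume profile data stop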
Stay healthy!
--
Christian Reiss - email(a)christian-reiss.de      /"\  ASCII Ribbon
                  support(a)alpha-labs.net        \ /  Campaign
                                                    X   against HTML
WEB               alpha-labs.net                   / \  in eMails

GPG Retrieval     https://gpg.christian-reiss.de
GPG ID            ABCD43C5, 0x44E29126ABCD43C5
GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5

"It's better to reign in hell than to serve in heaven.",
                                  John Milton, Paradise lost.