On March 24, 2020 11:20:10 AM GMT+02:00, Christian Reiss <email@christian-reiss.de> wrote:
Hey Strahil,
it seems you're the go-to guy for pretty much all my issues. I thank you
for this and your continued support. Much appreciated.
200 MB/s reads, however, seems more like a broken config or a
malfunctioning gluster than something requiring performance tweaks. I
enabled profiling, so I have real-life data available. But seriously,
even without tweaks I would like (need) 4 times those numbers. 800 MB/s
write speed is okay-ish, given that the 10 Gbit backbone can be the
limiting factor.
We are running BigCouch/CouchDB applications that really, really need
IO - not in throughput but in response times. 200 MB/s is just way off.
It feels as if gluster can/should do more, natively.
-Chris.
On 24/03/2020 06:17, Strahil Nikolov wrote:
Hey Chris,
You got some options.
1. To speed up reads in HCI, you can use the option
cluster.choose-local: on (example commands after this list).
2. You can adjust the server and client event-threads
3. You can use NFS Ganesha (which connects to all servers via
libgfapi) as an NFS server. In that case you have to use some
clustering like ctdb or pacemaker.
Note: disable cluster.choose-local if you use this one.
4. You can try the built-in NFS, although it's deprecated (NFS
Ganesha is fully supported).
5. Create a gluster profile during the tests. I have seen numerous
improperly selected tests, so test with your real-world workload.
Synthetic tests are not good.
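For reference, the commands would look roughly like this ('data' is just a placeholder volume name, and the thread counts are examples rather than recommendations):

# 1. prefer local bricks for reads in HCI
gluster volume set data cluster.choose-local on
# 2. adjust server and client event-threads
gluster volume set data server.event-threads 4
gluster volume set data client.event-threads 8
# 5. profile the volume while the real workload runs
gluster volume profile data start
gluster volume profile data info
gluster volume profile data stop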
Best Regards,
Strahil Nikolov
Hey Chris,
What type is your VM?
Try the 'High Performance' type (there is good RH documentation on that topic).
If the DB load were directly on gluster, you could use the settings in '/var/lib/glusterd/groups/db-workload' to optimize for that, but I'm not sure this will bring any performance gain inside a VM.
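(If you want to experiment with it anyway, such a group file can be applied in one go - 'data' is again a placeholder volume name:
gluster volume set data group db-workload
This sets every tunable listed in the file at once; point 4 below refers to the same tunables.)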
1. Check the VM disk scheduler. Use 'noop'/'none' (which one depends on whether multiqueue is enabled) to let the hypervisor aggregate the I/O requests from multiple VMs.
Next, set the 'noop'/'none' disk scheduler on the hosts as well - these two are optimal for SSDs and NVMe disks (if I recall correctly, you are using SSDs). Rough commands for points 1-3, 5 and 6 are sketched after this list.
2. Disable C-states on the host and guest (there are a lot of articles about that).
3. Enable MTU 9000 for Hypervisor (gluster node).
4. You can try setting/unsetting the tunables in the db-workload group and run benchmarks with your real workload.
5. Some users reported that enabling TCP offload on the hosts gave a huge improvement in gluster performance - you can try that.
Of course, there are mixed feelings, as others report that disabling it improves performance. I guess it is workload-specific.
6. You can try to tune 'performance.read-ahead' on your gluster volume.
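As promised, rough sketches for points 1-3, 5 and 6 (device, interface and volume names are placeholders - adjust them to your setup):

# 1. disk scheduler, inside the VM and on the host (vda is an example device)
cat /sys/block/vda/queue/scheduler
echo none > /sys/block/vda/queue/scheduler
# 2. C-states are usually limited via the kernel cmdline, e.g.
#    processor.max_cstate=1 intel_idle.max_cstate=0 (plus BIOS settings)
# 3. jumbo frames on the gluster network interface
ip link set eth0 mtu 9000
# 5. check and toggle TCP offloads, then compare with your workload
ethtool -k eth0
ethtool -K eth0 tso off gso off
# 6. read-ahead tuning on the volume
gluster volume set data performance.read-ahead on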
Here are some settings from other users (taken from an old e-mail):
performance.read-ahead: on
performance.stat-prefetch: on
performance.flush-behind: on
performance.client-io-threads: on
performance.write-behind-window-size: 64MB (shard size)
For 48 cores per host:
server.event-threads: 4
client.event-threads: 8
Your event-threads seem to be too high. And yes, the documentation explains it, but without an example it becomes more confusing.
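To make that concrete, the settings above translate to plain volume-set calls ('data' is a placeholder volume name; pick values that match your own sizing):

gluster volume set data performance.read-ahead on
gluster volume set data performance.stat-prefetch on
gluster volume set data performance.flush-behind on
gluster volume set data performance.client-io-threads on
gluster volume set data performance.write-behind-window-size 64MB
gluster volume set data server.event-threads 4
gluster volume set data client.event-threads 8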
Best Regards,
Strahil Nikolov
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/BOFZEJPBIRXUAXLJS6M34Z3RHPDNQB4D/