Check and tune the following (example commands for several of these are sketched after the list):
- I/O scheduler on the host (noop/none is usually good for write-heavy loads, while (mq-)deadline tends to do better for reads)
- CPU C-states (deep idle states add wake-up latency)
- Tuned profile: some of the 'dirty' writeback settings (vm.dirty_ratio and friends) help avoid I/O stalls
- MTU size and TCP offloading (some users report better performance with offloading enabled, others with it disabled)
- I/O scheduler in the VMs is best set to noop/none, as the I/O is already reordered at the hypervisor level
- With oVirt 4.4, hugepages for VMs should be fixed, so you can try them
- Use the 'High Performance' VM profile
- libgfapi performs better than FUSE, so if you are OK with the limitations it imposes, you can switch to libgfapi instead of FUSE (switching requires powering the VM off and on)
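
Some rough examples for the above (sketches only; the device name 'sda', the
NIC name 'ens1f0' and the cluster version are placeholders, adjust them to
your environment):

# 1) I/O scheduler (host and VM); 'sda' is an example device
cat /sys/block/sda/queue/scheduler          # active scheduler is shown in brackets
echo none > /sys/block/sda/queue/scheduler  # persist via a udev rule or tuned

# 2) CPU C-states
cpupower idle-info                          # shows how deep the cores may sleep
# to limit them, add e.g. 'intel_idle.max_cstate=1 processor.max_cstate=1'
# to the kernel command line and reboot

# 3) Tuned profile and the writeback ('dirty') knobs
tuned-adm active
tuned-adm profile virtual-host              # on the hypervisors
sysctl vm.dirty_ratio vm.dirty_background_ratio

# 4) MTU and TCP offloading; 'ens1f0' stands in for your storage NIC
ip link set dev ens1f0 mtu 9000
ethtool -K ens1f0 tso off gso off gro off   # measure with offloading on and off

# 5) hugepages per VM: set the 'hugepages' custom property in the VM's
# Custom Properties (value is the page size in KiB, e.g. 2048)

# 6) libgfapi (run on the engine host; power-cycle the VMs afterwards)
engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine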
Best Regards,
Strahil Nikolov
On 19 August 2020 21:26:44 GMT+03:00, info--- via Users <users(a)ovirt.org>
wrote:
>Hello,
>
>I'm running a home setup with 3 nodes and 2 SATA SSDs per node.
>For storage I'm running GlusterFS over 40 Gbit/s links.
>
>Software version: 4.3.9.4-1.el7
>
>I have a lot of I/O wait on the nodes (20%) and in the VMs (50%).
>
>gluster volume top vmstore write-perf bs 2014 count 1024 | grep Through
>Throughput 635.54 MBps time 0.0032 secs
>Throughput 614.89 MBps time 0.0034 secs
>Throughput 622.31 MBps time 0.0033 secs
>Throughput 643.07 MBps time 0.0032 secs
>Throughput 621.75 MBps time 0.0033 secs
>Throughput 609.26 MBps time 0.0034 secs
>
>gluster volume top vmstore read-perf bs 2014 count 1024 | grep Through
>Throughput 1274.62 MBps time 0.0016 secs
>Throughput 1320.32 MBps time 0.0016 secs
>Throughput 1203.93 MBps time 0.0017 secs
>Throughput 1293.81 MBps time 0.0016 secs
>Throughput 1213.14 MBps time 0.0017 secs
>Throughput 1193.48 MBps time 0.0017 secs
>
>Volume Name: vmstore
>Type: Distributed-Replicate
>Volume ID: 195e2a05-9667-4b8b-b0b7-82294631de50
>Status: Started
>Snapshot Count: 0
>Number of Bricks: 2 x 3 = 6
>Transport-type: tcp
>Bricks:
>Brick1: 10.9.9.101:/gluster_bricks/vmstore/vmstore
>Brick2: 10.9.9.102:/gluster_bricks/vmstore/vmstore
>Brick3: 10.9.9.103:/gluster_bricks/vmstore/vmstore
>Brick4: 10.9.9.101:/gluster_bricks/S4CYNF0M219849L/S4CYNF0M219849L
>Brick5: 10.9.9.102:/gluster_bricks/S4CYNF0M219836L/S4CYNF0M219836L
>Brick6: 10.9.9.103:/gluster_bricks/S4CYNF0M219801Y/S4CYNF0M219801Y
>Options Reconfigured:
>performance.client-io-threads: on
>nfs.disable: on
>transport.address-family: inet
>performance.strict-o-direct: on
>performance.quick-read: off
>performance.read-ahead: off
>performance.io-cache: off
>performance.low-prio-threads: 32
>network.remote-dio: off
>cluster.eager-lock: enable
>cluster.quorum-type: auto
>cluster.server-quorum-type: server
>cluster.data-self-heal-algorithm: full
>cluster.locking-scheme: granular
>cluster.shd-max-threads: 8
>cluster.shd-wait-qlength: 10000
>features.shard: on
>user.cifs: off
>cluster.choose-local: off
>client.event-threads: 4
>server.event-threads: 4
>network.ping-timeout: 30
>storage.owner-uid: 36
>storage.owner-gid: 36
>cluster.granular-entry-heal: enable
>
>Please help me to analyse the root cause.
>
>Many thanks
>Metz