Thank you for the quick reply.
- I/O scheduler hosts -> changed
echo noop > /sys/block/sdb/queue/scheduler
echo noop > /sys/block/sdc/queue/scheduler
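To make the noop setting survive a reboot I plan to add a udev rule like the following (my own sketch, not tested yet, please correct me if there is a better way):
# /etc/udev/rules.d/60-io-scheduler.rules
ACTION=="add|change", KERNEL=="sd[bc]", ATTR{queue/scheduler}="noop"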
- CPU states -> can you explain this a bit more?
cat /dev/cpu_dma_latency
F
hexdump -C /dev/cpu_dma_latency
00000000 46 00 00 00 |F...|
00000004
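If I read the hexdump right, the file holds a 32-bit little-endian integer in microseconds, so the 'F' is just ASCII 0x46 = 70 us. Is the idea to check which C-states the CPUs actually enter? I would look at it like this (guessing these are the right commands):
cpupower idle-info
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name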
- Tuned profile -> can you explain this a bit more?
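I assume this is about tuned-adm? On the hosts I would check the active profile and switch it like this (guessing that virtual-host is the right profile for oVirt hosts):
tuned-adm active
tuned-adm list
tuned-adm profile virtual-host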
- MTU was already set to 9000 -> can you explain TCP offloading a bit more and how I
change the settings? My guess with ethtool -K is below the output.
ethtool -k enp1s0f0 | grep offload
tcp-segmentation-offload: on
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
l2-fwd-offload: off [fixed]
hw-tc-offload: off
rx-udp_tunnel-port-offload: on
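If some of these offloads should be toggled, I assume it is done with ethtool -K, e.g. (please tell me which ones you would change; as far as I know this is not persistent across reboots):
ethtool -K enp1s0f0 tso off gso off gro off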
- I/O scheduler VM had already none
cat /sys/block/vda/queue/scheduler
none
- oVirt 4.4 -> thanks, I was not aware that the new version is final. I will check and
upgrade.
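Before the upgrade I would take an engine backup first (assuming engine-backup is still the recommended way):
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=engine-backup.log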
- Use High Performance VMs -> I got the following message -> so I need to make changes
on the VM as well? (my guesses are below the list)
The following recommended settings for running the High Performance type with the optimal
configuration were not detected. Please consider manually changing of the following before
applying:
CPU PINNING:
VIRTUAL NUMA AND NUMA PINNING:
HUGE PAGES:
KERNEL SAME PAGE MERGING (KSM):
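For the huge pages part my guess is that I have to reserve them on the hosts via the kernel command line and then set the hugepages custom property on the VM (value in KiB, e.g. 1048576 for 1 GiB pages), is that correct? On the hosts something like this (the page count of 8 is only an example):
grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=8"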
- libgfapi -> I will execute the following command on the engine, and then it will change
all VMs after a reboot?
engine-config -s LibgfApiSupported=true --cver=4.2
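I assume I also have to restart the engine afterwards and power the VMs off and on again (a reboot from inside the guest is probably not enough?):
engine-config -g LibgfApiSupported
systemctl restart ovirt-engine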
- changed values (from your other thread):
gluster volume set vmstore performance.read-ahead on
gluster volume set vmstore performance.stat-prefetch on
gluster volume set vmstore performance.write-behind-window-size 64MB
gluster volume set vmstore performance.flush-behind on
- already configured:
performance.client-io-threads: on
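To verify all of the values in one go I would check with (assuming vmstore is the only volume that matters here):
gluster volume get vmstore all | grep -E 'read-ahead|stat-prefetch|write-behind|flush-behind|client-io-threads'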
Best Regards
Dirk