
On 19 August 2020 22:39:22 GMT+03:00, info--- via Users <users@ovirt.org> wrote:
Thank you for the quick reply.
- I/O scheduler hosts -> changed:
echo noop > /sys/block/sdb/queue/scheduler
echo noop > /sys/block/sdc/queue/scheduler
On reboot it will be reverted. Test this way, and if you notice an improvement, make it persistent with udev rules. Keep in mind that multipath devices also have a scheduler.
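A minimal udev sketch (the rule file name is just an example; match the kernel names to your brick disks):

# /etc/udev/rules.d/60-io-scheduler.rules (example name)
ACTION=="add|change", KERNEL=="sd[bc]", ATTR{queue/scheduler}="noop"

Reload with 'udevadm control --reload' and 'udevadm trigger', or just leave it for the next reboot.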
- CPU states -> can you explain this a bit more?
On Intel CPUs you can use these kernel parameters (verify them online, as I'm typing them from memory):
processor.max_cstate=1 intel_idle.max_cstate=0
cat /dev/cpu_dma_latency
F
hexdump -C /dev/cpu_dma_latency
00000000  46 00 00 00  |F...|
00000004
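To make them persistent on EL7/EL8, something like this should work (assuming grubby is available):

grubby --update-kernel=ALL --args="processor.max_cstate=1 intel_idle.max_cstate=0"
# reboot, then verify:
cat /proc/cmdline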
- Tuned profile -> can you explain this a bit more?
Download ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.5.0.0-6.el7rhgs.src.rpm and extract the SRPM. Inside it you will find two dirs: 'rhgs-random-io' and 'rhgs-sequential-io'. Open the tuned.conf files inside them and check the 'dirty' sysctl tunings. One is for random I/O (usually the VM workload is random I/O) and the other for sequential. You can set the 'dirty' settings in your sysctl.conf or in a custom tuned profile.
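Roughly, something like this to get at the values (the two 'dirty' numbers below are placeholders, not the real profile values -- copy whatever the extracted tuned.conf says):

mkdir rhgs-srpm && cd rhgs-srpm
rpm2cpio ../redhat-storage-server-3.5.0.0-6.el7rhgs.src.rpm | cpio -idmv
grep -R dirty .
# then e.g. in /etc/sysctl.d/90-gluster-dirty.conf (placeholder values):
vm.dirty_ratio = 5
vm.dirty_background_ratio = 4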
- MTU was already set to 9000 -> can you explain a bit more?
Have you tested via: ping -M do -s 8972 another-gluster-node ? The 8972-byte payload plus 28 bytes of IP/ICMP headers adds up to exactly 9000, so the ping only goes through if jumbo frames work end to end.
- tcp-offloading and how do I change the settings?
You can google it. Usually ethtool is your best friend.
ethtool -k enp1s0f0 | grep offload
tcp-segmentation-offload: on
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
l2-fwd-offload: off [fixed]
hw-tc-offload: off
rx-udp_tunnel-port-offload: on
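Toggling one is just (using the short alias for generic-receive-offload; persistence depends on your network config, e.g. ETHTOOL_OPTS in an ifcfg file):

ethtool -K enp1s0f0 gro off
# verify:
ethtool -k enp1s0f0 | grep generic-receive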
- I/O scheduler in the VM was already 'none':
cat /sys/block/vda/queue/scheduler
none
Consider using udev here too.
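Same sketch as on the hosts, just for the virtio disks (assuming vdX naming):

# /etc/udev/rules.d/60-io-scheduler.rules (inside the VM)
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"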
- oVirt 4.4 -> thanks. I was not aware that the new version is final. I will check and upgrade.
Upgrades are not so easy, as you need to:
A) Upgrade the OS to EL8
B) Upgrade oVirt
Test in advance before proceeding on prod!
- Use High Performance VMs -> I got the following message -> So I need to make changes also on the VM?
The following recommended settings for running the High Performance type with the optimal configuration were not detected. Please consider manually changing of the following before applying:
CPU PINNING:
VIRTUAL NUMA AND NUMA PINNING:
HUGE PAGES:
KERNEL SAME PAGE MERGING (KSM):
- libgfapi -> I will execute the following command on the engine, and then it will change all VMs after a reboot?
engine-config -s LibgfApiSupported=true --cver=4.2
A VM reboot won't reread the VM config; you will need a power 'off' and 'on' action for each VM. I'm not sure about the command itself. I haven't used libgfapi for eons.
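You can at least check what the engine currently has before cycling the VMs:

engine-config -g LibgfApiSupported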
- changed values (from your other thread):
gluster volume set vmstore performance.read-ahead on
gluster volume set vmstore performance.stat-prefetch on
gluster volume set vmstore performance.write-behind-window-size 64MB
gluster volume set vmstore performance.flush-behind on
- already configured: performance.client-io-threads: on
It's easier to use:
gluster volume set VOLUME group virt
WARNING: This will enable sharding, and sharding cannot be disabled in an easy way. NEVER EVER DISABLE SHARDING. You can check the 'virt' group of settings on your gluster nodes in /var/lib/glusterd/groups.
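To see exactly what the group applies before committing, and to confirm the sharding state afterwards:

cat /var/lib/glusterd/groups/virt
gluster volume get vmstore features.shard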
Best Regards,
Dirk