On 2019-04-15 12:58, Alex McWhirter wrote:

On 2019-04-15 12:43, Darrell Budic wrote:

Interesting. Whose 10G cards, and which offload settings did you disable? Did you do that on the servers, the VM host clients, or both?

On Apr 15, 2019, at 11:37 AM, Alex McWhirter <alex@triadic.us> wrote:

I went in and disabled TCP offload on all the NICs, huge performance boost. Went from 110MB/s to 240MB/s sequential writes; reads lost a bit of performance, going down to 680MB/s, but that's a decent trade-off. Latency is still really high though, need to work on that. I think some more TCP tuning might help.


Those changes didn't do a whole lot, but I ended up enabling performance.read-ahead on the Gluster volume. My blockdev read-ahead values were already 8192, which seemed good enough. Not sure if oVirt set those, or if it's just the defaults of my RAID controller.
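For reference, the Gluster and blockdev bits were basically just this (volume name and device here are placeholders, swap in your own):

gluster volume set data performance.read-ahead on
# read-ahead is in 512-byte sectors; only set it if yours isn't already sane
blockdev --getra /dev/sdb
blockdev --setra 8192 /dev/sdb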

Anyways, up to 350MB/s writes, 700MB/s reads, which happens to correlate with the saturation of my 10G network. Latency is still a slight issue, but at least now I'm not blocking :)


These are dual-port QLogic QLGE cards plugging into dual Cisco Nexus 3064s with vPC, which lets me run LACP across the two switches. They're FCoE/10GbE cards, so on the Cisco switches I had to disable LLDP on the ports to stop FCoE initiator errors from disabling the ports (as I don't use FCoE atm).


Bond options are "mode=4 lacp_rate=1 miimon=100 xmit_hash_policy=1".
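For anyone wondering what those mean, roughly (plus a quick way to confirm the bond actually picked them up; bond2 is my storage bond, yours may differ):

# mode=4             -> 802.3ad / LACP
# lacp_rate=1        -> fast, send LACPDUs every second
# miimon=100         -> MII link check every 100ms
# xmit_hash_policy=1 -> layer3+4, hashes on IP + port
grep -E 'Bonding Mode|Hash Policy|LACP' /proc/net/bonding/bond2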


Then I have the following /sbin/ifup-local script that triggers when the Storage network is brought up:

#!/bin/bash
# Called on ifup; $1 is the name of the interface being brought up
case "$1" in
  Storage)
    # Turn off checksum and segmentation offloads on both NIC ports
    /sbin/ethtool -K ens2f0 tx off rx off tso off gso off
    /sbin/ethtool -K ens2f1 tx off rx off tso off gso off
    # Raise the transmit queue length on the slaves, the bond, and the bridge
    /sbin/ip link set dev ens2f0 txqueuelen 10000
    /sbin/ip link set dev ens2f1 txqueuelen 10000
    /sbin/ip link set dev bond2 txqueuelen 10000
    /sbin/ip link set dev Storage txqueuelen 10000
  ;;
  *)
  ;;
esac
exit 0
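Make sure /sbin/ifup-local is executable or it never gets called, and you can double check that the offloads actually stuck, something like (ens2f0 being one of my ports, adjust to yours):

chmod +x /sbin/ifup-local
/sbin/ethtool -k ens2f0 | grep -E 'checksumming|segmentation-offload'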

If you have LRO, disable it too IMO; these cards don't do LRO, so it's not applicable to me.
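To see whether your card even does LRO, and to turn it off if it does (again, swap in your own interface name):

/sbin/ethtool -k ens2f0 | grep large-receive-offload
/sbin/ethtool -K ens2f0 lro off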

This did cut my read performance by about 50MB/s, but my writes went from 98-110MB/s to about 240MB/s, and then enabling read-ahead got me to the 350MB/s it should have been.


Oh, and I did it on both the VM hosts and the storage machines. Same cards in all of them.