Interesting. Whose 10G cards are those, and which offload settings did you disable? Did you
do that on the servers, on the VM host clients, or both?
On Apr 15, 2019, at 11:37 AM, Alex McWhirter <alex(a)triadic.us> wrote:
> I went in and disabled TCP offload on all the NICs, and got a huge performance boost:
> sequential writes went from 110MB/s to 240MB/s. Reads lost a bit of performance, going
> down to 680MB/s, but that's a decent trade-off. Latency is still really high, though;
> I need to work on that. I think some more TCP tuning might help.
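(For reference, the offload toggles would look something like the following. `eth0` is a placeholder interface name, and which flags are supported depends on the NIC driver; these settings also don't persist across reboots unless you wire them into your network scripts.)

```shell
# Show the current offload settings for the interface
ethtool -k eth0

# Disable the common segmentation/receive offloads.
# Flag support varies by driver; fixed/unsupported flags will be reported as errors.
ethtool -K eth0 tso off gso off gro off lro off

# Optionally disable checksum offload as well; this is the kind of change
# that can trade some read throughput for better write behavior.
ethtool -K eth0 tx off rx off
```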
Those changes didn't do a whole lot on their own, but I ended up enabling
performance.read-ahead on the Gluster volume. My blockdev read-ahead values were already
8192, which seemed good enough. Not sure if oVirt set those, or if it's just the default
from my RAID controller.

Anyway, I'm up to 350MB/s writes and 700MB/s reads, which happens to coincide with
saturating my 10G network. Latency is still a slight issue, but at least now I'm not
blocking :)
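(For anyone following along, the Gluster and blockdev bits look roughly like this; `myvol` and `/dev/sda` are placeholders for your own volume and backing device:)

```shell
# Enable the read-ahead translator on the Gluster volume
gluster volume set myvol performance.read-ahead on

# Check the kernel read-ahead value for the backing device
# (reported in 512-byte sectors, so 8192 = 4MB)
blockdev --getra /dev/sda

# Raise it if it comes back lower
blockdev --setra 8192 /dev/sda
```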
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5COPHAIVCVK...