In the ideal case, what you'd have:

    |---- Single virtio virtual interface
    |
VM ---- Host ==== Switch stack
              |
              |------- 4x 1Gbit interfaces bonded over LACP

The only change from your diagram: virtio instead of "1Gbit".
You can't get blood from a stone; that is, you can't manufacture bandwidth that isn't there. If you need more than gigabit speed, you need something like 10Gbit. Realize that usually we're talking about a system built to run more than one VM; if it's just one, you'll do better with dedicated hardware. If it's more than one VM, then there's sharing going on, though you might be able to apply QoS (either in oVirt or outside it). Even then, with just one VM on 10Gbit, you won't necessarily get the full 10Gbit out of virtio. At the same time, bonding should help in the case of multiple VMs.
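To make "QoS outside oVirt" concrete, here's a hedged sketch that shapes one VM's traffic at its host-side tap device with tc. The device name "vnet0" and the 2Gbit figure are assumptions for illustration; check the real tap name with "ip link".

    # Cap egress for one VM at its host-side tap device ("vnet0" is an
    # assumed name; confirm with "ip link"). A token bucket filter
    # limits the VM to roughly 2Gbit with a small burst allowance.
    tc qdisc add dev vnet0 root tbf rate 2gbit burst 1mb latency 50ms

    # Verify the qdisc is installed and counting traffic:
    tc -s qdisc show dev vnet0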
Now, back to the suggestion at hand: multiple virtual NICs. If the logical networks presented via oVirt are such that each logical network has its own "pipe", then defining a vNIC on each of those networks gets you the same sort of "gain" you get from bonding. That is, no magic bandwidth increase for a particular connection, but more pipes available for multiple connections (essentially what you'd expect). A sketch of what that looks like inside the guest follows.
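For instance (purely illustrative; interface names, addresses, and networks are assumptions), a guest with one vNIC on each of two logical networks simply gets an address per interface, and different clients reach it over different pipes:

    # Inside the guest: one address per vNIC, one vNIC per oVirt
    # logical network (names/addresses are made up for this sketch).
    nmcli con add type ethernet ifname eth0 con-name net-a \
          ipv4.method manual ipv4.addresses 192.168.10.5/24
    nmcli con add type ethernet ifname eth1 con-name net-b \
          ipv4.method manual ipv4.addresses 192.168.20.5/24

No single connection gets more than one pipe's worth of bandwidth, but concurrent connections arriving over the two networks can use both.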
Obviously it's up to you how you want to do this, but I think you'd do better to improve the infrastructure underlying oVirt than to try bonding vNICs. I'm fairly sure of that. Bonding at the VM level strikes me as something you'd do to simulate a setup, not something you'd do because it's the right/best approach.
On 05/14/2018 03:03 PM, Doug Ingham wrote:
On 14 May 2018 at 15:35, Juan Pablo <pablo.localhost@gmail.com> wrote:
So you have LACP on your host, and you want LACP also on your VM... somehow that doesn't sound correct.
There are several LACP modes. Which one are you using on the host?
Correct!
    |---- Single 1Gbit virtual interface
    |
VM ---- Host ==== Switch stack
              |
              |------- 4x 1Gbit interfaces bonded over LACP
The traffic for all of the VMs is distributed across the host's 4 bonded links; however, each VM is limited to the 1Gbit of its own virtual interface. In the case of my proxy, all web traffic is routed through it, so its single Gbit interface has become a bottleneck.
To increase the total bandwidth available to my VM, I presume I will need to add multiple Gbit VIFs & combine them with a bonding mode.
Balance-alb (mode 6) is one option, though I'd prefer to use LACP (mode 4) if possible.
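For what it's worth, a minimal in-guest sketch of the balance-alb variant, assuming the extra virtio vNICs appear as eth0/eth1 (the names are assumptions):

    # Inside the guest: bond two virtio vNICs with balance-alb (mode 6),
    # which needs no LACP cooperation from the host or the switch.
    nmcli con add type bond ifname bond0 con-name bond0 \
          bond.options "mode=balance-alb,miimon=100"
    nmcli con add type ethernet ifname eth0 master bond0
    nmcli con add type ethernet ifname eth1 master bond0
    nmcli con up bond0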
2018-05-14 16:20 GMT-03:00 Doug Ingham:
On 14 May 2018 at 15:03, Vinícius Ferrão wrote:
You should use better hashing algorithms for LACP.
Take a look at this explanation:
https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en
In general only L2 hashing is done; you can achieve better throughput with L3 and multiple IPs, or with L4 (ports). Your switch should support those features too, if you're using one.
V.
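As a concrete (and hedged) illustration of the hashing point: on the host side the policy is just a bonding option; the bond name "bond0" below is an assumption.

    # Check and change the transmit hash policy of the host's LACP bond:
    cat /sys/class/net/bond0/bonding/xmit_hash_policy
    echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy

    # Persistently, the same setting belongs in the bond's option
    # string, e.g.: mode=4 miimon=100 xmit_hash_policy=layer3+4
    # (in oVirt, entered as custom bonding options under "Setup Host
    # Networks"; exact UI wording varies by version).

With layer3+4 the hash uses source/destination IPs and ports, so different flows between the same two endpoints can land on different member links.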
The problem isn't the LACP connection between the host & the switch, but setting up LACP between the VM & the host. For reasons of stability, my 4.1 cluster's switch type is currently "Linux Bridge", not "OVS". Ergo my question: is LACP on the VM possible with that, or will I have to use ALB?
Regards,
Doug
On 14 May 2018, at 15:16, Doug Ingham wrote:
Hi All,
My hosts have all of their interfaces bonded via LACP to maximise throughput; however, the VMs are still limited to Gbit virtual interfaces. Is there a way to configure my VMs to take full advantage of the bonded physical interfaces?
One way might be adding several VIFs to each VM & using ALB bonding, though I'd rather use LACP if possible...
Cheers,
--
Doug
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/