
Hi All,

My hosts have all of their interfaces bonded via LACP to maximise throughput, however the VMs are still limited to Gbit virtual interfaces. Is there a way to configure my VMs to take full advantage of the bonded physical interfaces?

One way might be adding several VIFs to each VM & using ALB bonding, however I'd rather use LACP if possible...

Cheers,
-- Doug

LACP is not intended for maximizing throughput. If you are using iSCSI, you should use multipathd instead.

Regards,

2018-05-14 15:16 GMT-03:00 Doug Ingham <dougti@gmail.com>:
Hi All, My hosts have all of their interfaces bonded via LACP to maximise throughput, however the VMs are still limited to Gbit virtual interfaces. Is there a way to configure my VMs to take full advantage of the bonded physical interfaces?
One way might be adding several VIFs to each VM & using ALB bonding, however I'd rather use LACP if possible...
Cheers, -- Doug

On 14 May 2018 at 15:01, Juan Pablo <pablo.localhost@gmail.com> wrote:
LACP is not intended for maximizing throughput. If you are using iSCSI, you should use multipathd instead.
regards,
Umm, maximising the total throughput for multiple concurrent connections is most definitely one of the uses of LACP. In this case, the VM is our main reverse proxy, and its single Gbit VIF has become a bottleneck.

It doesn't work that way... Anyway, can you share your detailed network config? Which LACP mode are you using? Have you run an iperf test or something to see where the problem is? Can you share the results?

2018-05-14 16:08 GMT-03:00 Doug Ingham <dougti@gmail.com>:
On 14 May 2018 at 15:01, Juan Pablo <pablo.localhost@gmail.com> wrote:
LACP is not intended for maximizing throughput. If you are using iSCSI, you should use multipathd instead.
regards,
Umm, maximising the total throughput for multiple concurrent connections is most definitely one of the uses of LACP. In this case, the VM is our main reverse proxy, and its single Gbit VIF has become a bottleneck.

You should use better hashing algorithms for LACP. Take a look at this explanation: https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en

In general only L2 hashing is done; you can achieve better throughput with L3 and multiple IPs, or with L4 (ports). Your switch should support those features too, if you’re using one.

V.

On 14 May 2018, at 15:16, Doug Ingham <dougti@gmail.com> wrote:

Hi All, My hosts have all of their interfaces bonded via LACP to maximise throughput, however the VMs are still limited to Gbit virtual interfaces. Is there a way to configure my VMs to take full advantage of the bonded physical interfaces? One way might be adding several VIFs to each VM & using ALB bonding, however I'd rather use LACP if possible...

Cheers,
-- Doug
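To make the hashing suggestion concrete, here's a minimal sketch of what checking and changing the policy on the host bond could look like (the bond name "bond0" and doing it via iproute2 from Python are assumptions; on an oVirt host you'd normally set this through the engine's bond options rather than by hand):

    #!/usr/bin/env python3
    # Rough sketch: inspect the transmit hash policy of an existing
    # 802.3ad bond and switch it to layer3+4 so flows are spread by
    # IP address and port instead of MAC alone. "bond0" is assumed.
    import subprocess
    from pathlib import Path

    BOND = "bond0"

    def current_hash_policy(bond):
        # The kernel bonding driver exposes this in /proc/net/bonding/<bond>
        for line in Path(f"/proc/net/bonding/{bond}").read_text().splitlines():
            if line.startswith("Transmit Hash Policy"):
                return line.split(":", 1)[1].strip()
        return "unknown"

    def set_layer3_4(bond):
        # May require the bond to be reconfigured while down on some kernels,
        # so persistent configs (ifcfg/NetworkManager) are the usual place.
        subprocess.run(
            ["ip", "link", "set", "dev", bond, "type", "bond",
             "xmit_hash_policy", "layer3+4"],
            check=True,
        )

    if __name__ == "__main__":
        print(BOND, "transmit hash policy:", current_hash_policy(BOND))

Note that the hash only picks a link per flow, so a single TCP connection still tops out at one physical link's speed; the gain is in spreading many concurrent flows.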

On 14 May 2018 at 15:03, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
You should use better hashing algorithms for LACP.
Take a look at this explanation: https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en
In general only L2 hashing is made, you can achieve better throughput with L3 and multiple IPs, or with L4 (ports).
Your switch should support those features too, if you’re using one.
V.
The problem isn't the LACP connection between the host & the switch, but setting up LACP between the VM & the host. For reasons of stability, my 4.1 cluster's switch type is currently "Linux Bridge", not "OVS". Ergo my question: is LACP on the VM possible with that, or will I have to use ALB?

Regards,
Doug
On 14 May 2018, at 15:16, Doug Ingham wrote:
Hi All, My hosts have all of their interfaces bonded via LACP to maximise throughput, however the VMs are still limited to Gbit virtual interfaces. Is there a way to configure my VMs to take full advantage of the bonded physical interfaces?
One way might be adding several VIFs to each VM & using ALB bonding, however I'd rather use LACP if possible...
Cheers, -- Doug
-- Doug

So you have LACP on your host, and you want LACP also on your VM... somehow that doesn't sound correct. There are several LACP modes; which one are you using on the host?

2018-05-14 16:20 GMT-03:00 Doug Ingham <dougti@gmail.com>:
On 14 May 2018 at 15:03, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
You should use better hashing algorithms for LACP.
Take a look at this explanation: https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en
In general only L2 hashing is made, you can achieve better throughput with L3 and multiple IPs, or with L4 (ports).
Your switch should support those features too, if you’re using one.
V.
The problem isn't the LACP connection between the host & the switch, but setting up LACP between the VM & the host. For reasons of stability, my 4.1 cluster's switch type is currently "Linux Bridge", not "OVS". Ergo my question, is LACP on the VM possible with that, or will I have to use ALB?
Regards, Doug
On 14 May 2018, at 15:16, Doug Ingham wrote:
Hi All, My hosts have all of their interfaces bonded via LACP to maximise throughput, however the VMs are still limited to Gbit virtual interfaces. Is there a way to configure my VMs to take full advantage of the bonded physical interfaces?
One way might be adding several VIFs to each VM & using ALB bonding, however I'd rather use LACP if possible...
Cheers, -- Doug
-- Doug

On 14 May 2018 at 15:35, Juan Pablo <pablo.localhost@gmail.com> wrote:
So you have LACP on your host, and you want LACP also on your VM... somehow that doesn't sound correct. There are several LACP modes; which one are you using on the host?
Correct!

    |---- Single 1Gbit virtual interface
    |
VM ---- Host ==== Switch stack
             |
             |------- 4x 1Gbit interfaces bonded over LACP

The traffic for all of the VMs is distributed across the host's 4 bonded links, however each VM is limited to the 1Gbit of its own virtual interface. In the case of my proxy, all web traffic is routed through it, so its single Gbit interface has become a bottleneck.

To increase the total bandwidth available to my VM, I presume I will need to add multiple Gbit VIFs & bridge them with a bonding mode. Balance-alb (mode 6) is one option, however I'd prefer to use LACP (mode 4) if possible.
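For what it's worth, a rough sketch of what the multi-VIF idea could look like inside the guest, using balance-alb since it needs no LACP partner on the bridge side (the interface and bond names are assumptions, and a persistent setup would go through ifcfg/NetworkManager rather than ad-hoc commands):

    #!/usr/bin/env python3
    # Sketch: enslave two virtio VIFs into a balance-alb bond inside the VM.
    # eth0/eth1 and bond0 are assumed names. 802.3ad (LACP) inside the guest
    # would need an LACP partner on the other side of the virtual link, which
    # a plain Linux bridge on the host doesn't provide.
    import subprocess

    SLAVES = ["eth0", "eth1"]
    BOND = "bond0"

    def run(*args):
        print("+", " ".join(args))
        subprocess.run(args, check=True)

    run("ip", "link", "add", BOND, "type", "bond", "mode", "balance-alb")
    for s in SLAVES:
        run("ip", "link", "set", s, "down")      # slaves must be down to enslave
        run("ip", "link", "set", s, "master", BOND)
    run("ip", "link", "set", BOND, "up")
    for s in SLAVES:
        run("ip", "link", "set", s, "up")

Worth testing carefully though, as balance-alb's receive balancing relies on ARP rewriting that may behave unpredictably behind the host's Linux bridge.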
2018-05-14 16:20 GMT-03:00 Doug Ingham:
On 14 May 2018 at 15:03, Vinícius Ferrão wrote:
You should use better hashing algorithms for LACP.
Take a look at this explanation: https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en
In general only L2 hashing is made, you can achieve better throughput with L3 and multiple IPs, or with L4 (ports).
Your switch should support those features too, if you’re using one.
V.
The problem isn't the LACP connection between the host & the switch, but setting up LACP between the VM & the host. For reasons of stability, my 4.1 cluster's switch type is currently "Linux Bridge", not "OVS". Ergo my question, is LACP on the VM possible with that, or will I have to use ALB?
Regards, Doug
On 14 May 2018, at 15:16, Doug Ingham wrote:
Hi All, My hosts have all of their interfaces bonded via LACP to maximise throughput, however the VMs are still limited to Gbit virtual interfaces. Is there a way to configure my VMs to take full advantage of the bonded physical interfaces?
One way might be adding several VIFs to each VM & using ALB bonding, however I'd rather use LACP if possible...
Cheers, -- Doug
-- Doug
-- Doug

Once upon a time, Doug Ingham <dougti@gmail.com> said:
Correct!
    |---- Single 1Gbit virtual interface
    |
VM ---- Host ==== Switch stack
             |
             |------- 4x 1Gbit interfaces bonded over LACP
The traffic for all of the VMs is distributed across the host's 4 bonded links, however each VM is limited to the 1Gbit of its own virtual interface. In the case of my proxy, all web traffic is routed through it, so its single Gbit interface has become a bottleneck.
It was my understanding that the virtual interface showing up as 1 gig was just a reporting thing (something has to be put in the speed field). I don't think the virtual interface is actually limited to 1 gig, the server will just pass packets as fast as it can.

-- Chris Adams <cma@cmadams.net>
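A quick way to test that from inside the guest is to compare the sysfs speed field against an actual measurement, for example along these lines (the interface name and the iperf3 server address are assumptions):

    #!/usr/bin/env python3
    # Sketch: show that the "1 Gbit" figure is just a reported value and
    # measure real throughput with iperf3. eth0 and the server IP are assumed;
    # the other end needs a running `iperf3 -s`.
    import json
    import subprocess
    from pathlib import Path

    IFACE = "eth0"
    IPERF_SERVER = "10.0.0.10"

    # virtio may report -1/unknown or a nominal figure here
    speed = Path(f"/sys/class/net/{IFACE}/speed").read_text().strip()
    print(f"{IFACE} reported speed: {speed} Mb/s")

    # several parallel streams so an L3/L4-hashed bond can spread them
    out = subprocess.run(
        ["iperf3", "-c", IPERF_SERVER, "-P", "4", "-t", "10", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    print(f"measured throughput: {bps / 1e9:.2f} Gbit/s")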

On Mon, May 14, 2018, 11:33 PM Chris Adams <cma@cmadams.net> wrote:
Once upon a time, Doug Ingham <dougti@gmail.com> said:
Correct!
    |---- Single 1Gbit virtual interface
    |
VM ---- Host ==== Switch stack
             |
             |------- 4x 1Gbit interfaces bonded over LACP
The traffic for all of the VMs is distributed across the host's 4 bonded links, however each VM is limited to the 1Gbit of its own virtual interface. In the case of my proxy, all web traffic is routed through it, so its single Gbit interface has become a bottleneck.
It was my understanding that the virtual interface showing up as 1 gig was just a reporting thing (something has to be put in the speed field). I don't think the virtual interface is actually limited to 1 gig, the server will just pass packets as fast as it can.
Absolutely right. Y.
-- Chris Adams <cma@cmadams.net>

Very handy to know! Cheers!

I've been running a couple of tests over the past few days & it seems, counter to what I said earlier, the proxy's interfering with the LACP balancing too, as it rewrites the origin. Duh. *facepalm* It skipped my mind that all our logs use the x-forwarded headers, so I overlooked that one!

I'm going to test a new config on the reverse proxy to round-robin the outbound IPs. We'll find out tomorrow if the VIF really isn't limited to the reported 1Gbit.

Thanks

On 14 May 2018 at 17:45, Yaniv Kaul <ykaul@redhat.com> wrote:
On Mon, May 14, 2018, 11:33 PM Chris Adams <cma@cmadams.net> wrote:
Once upon a time, Doug Ingham <dougti@gmail.com> said:
Correct!
    |---- Single 1Gbit virtual interface
    |
VM ---- Host ==== Switch stack
             |
             |------- 4x 1Gbit interfaces bonded over LACP
The traffic for all of the VMs is distributed across the host's 4 bonded links, however each VM is limited to the 1Gbit of its own virtual interface. In the case of my proxy, all web traffic is routed through it, so its single Gbit interface has become a bottleneck.
It was my understanding that the virtual interface showing up as 1 gig was just a reporting thing (something has to be put in the speed field). I don't think the virtual interface is actually limited to 1 gig, the server will just pass packets as fast as it can.
Absolutely right. Y.
-- Chris Adams <cma@cmadams.net>
-- Doug
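Regarding the round-robin outbound IP idea above, a rough sketch of the principle (all addresses and the upstream target are made up; a real reverse proxy would do this through its own bind/source-address options):

    #!/usr/bin/env python3
    # Sketch: cycle outbound connections across several local source IPs so
    # an L3/L4-hashed LACP bond sees distinct flows. All addresses are assumed.
    import itertools
    import socket

    SOURCE_IPS = ["192.0.2.11", "192.0.2.12", "192.0.2.13", "192.0.2.14"]
    UPSTREAM = ("192.0.2.50", 80)

    sources = itertools.cycle(SOURCE_IPS)

    def connect_upstream():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((next(sources), 0))   # 0 = let the kernel pick the source port
        s.connect(UPSTREAM)
        return s

    for _ in range(4):
        c = connect_upstream()
        print("from", c.getsockname(), "to", c.getpeername())
        c.close()

That said, if the host bond hashes on layer3+4, the varying source ports of separate connections should already spread flows even from a single IP; it's a pure layer2 hash between one proxy and one backend that pins everything to a single link.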

In the ideal case, what you'd have:

    |---- Single virtio virtual interface
    |
VM ---- Host ==== Switch stack
             |
             |------- 4x 1Gbit interfaces bonded over LACP

The change: virtio instead of "1 Gbit"

You can't get blood from a stone, that is, you can't manufacture bandwidth that isn't there. If you need more than gigabit speed, you need something like 10Gbit. Realize that usually, we're talking about a system created to run more than one VM. If just one, you'll do better with dedicated hardware. If more than one VM, then there's sharing going on, though you might be able to use QoS (either in oVirt or outside). Even so, if just one VM on 10Gbit, you won't necessarily get full 10Gbit out of virtio. But at the same time bonding should help in the case of multiple VMs.

Now, back to the suggestion at hand. Multiple virtual NICs. If the logical networks presented via oVirt are such that each (however many) logical network has its own "pipe", then defining a vNIC on each of those networks gets you the same sort of "gain" with respect to bonding. That is, no magic bandwidth increase for a particular connection, but more pipes available for multiple connections (essentially what you'd expect).

Obviously up to you how you want to do this. I think you might do better to consider a better underlying infrastructure to oVirt rather than trying to bond vNICs. Pretty sure I'm right about that. Would think the idea of bonding at the VM level might be best for simulating something rather than something you do because it's right/best.

On 05/14/2018 03:03 PM, Doug Ingham wrote:
On 14 May 2018 at 15:35, Juan Pablo <pablo.localhost@gmail.com> wrote:
So you have LACP on your host, and you want LACP also on your VM... somehow that doesn't sound correct. There are several LACP modes; which one are you using on the host?
Correct!
    |---- Single 1Gbit virtual interface
    |
VM ---- Host ==== Switch stack
             |
             |------- 4x 1Gbit interfaces bonded over LACP
The traffic for all of the VMs is distributed across the host's 4 bonded links, however each VM is limited to the 1Gbit of its own virtual interface. In the case of my proxy, all web traffic is routed through it, so its single Gbit interface has become a bottleneck.
To increase the total bandwidth available to my VM, I presume I will need to add multiple Gbit VIFs & bridge them with a bonding mode. Balance-alb (mode 6) is one option, however I'd prefer to use LACP (mode 4) if possible.
2018-05-14 16:20 GMT-03:00 Doug Ingham:
On 14 May 2018 at 15:03, Vinícius Ferrão wrote:
You should use better hashing algorithms for LACP.
Take a look at this explanation: https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en
In general only L2 hashing is made, you can achieve better throughput with L3 and multiple IPs, or with L4 (ports).
Your switch should support those features too, if you’re using one.
V.
The problem isn't the LACP connection between the host & the switch, but setting up LACP between the VM & the host. For reasons of stability, my 4.1 cluster's switch type is currently "Linux Bridge", not "OVS". Ergo my question, is LACP on the VM possible with that, or will I have to use ALB?
Regards, Doug
On 14 May 2018, at 15:16, Doug Ingham wrote:
Hi All, My hosts have all of their interfaces bonded via LACP to maximise throughput, however the VMs are still limited to Gbit virtual interfaces. Is there a way to configure my VMs to take full advantage of the bonded physical interfaces?
One way might be adding several VIFs to each VM & using ALB bonding, however I'd rather use LACP if possible...
Cheers, -- Doug
-- Doug
-- Doug
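As a rough illustration of the multiple-vNIC suggestion, attaching an extra vNIC on another logical network can also be scripted with the oVirt Python SDK, roughly like this (the engine URL, credentials, VM name and vNIC profile name are all placeholders):

    #!/usr/bin/env python3
    # Sketch: add a second virtio NIC, bound to a different vNIC profile
    # (i.e. a different logical network / "pipe"), to an existing VM.
    # All names and credentials below are placeholders.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",
        username="admin@internal",
        password="secret",
        insecure=True,  # use ca_file=... in real life
    )
    try:
        vms_service = connection.system_service().vms_service()
        vm = vms_service.list(search="name=proxy01")[0]

        # pick the vNIC profile of the second logical network
        profiles = connection.system_service().vnic_profiles_service().list()
        profile = next(p for p in profiles if p.name == "net-backend")

        nics_service = vms_service.vm_service(vm.id).nics_service()
        nics_service.add(
            types.Nic(
                name="nic2",
                interface=types.NicInterface.VIRTIO,
                vnic_profile=types.VnicProfile(id=profile.id),
            )
        )
    finally:
        connection.close()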

On Mon, May 14, 2018 at 11:25 PM, Christopher Cox <ccox@endlessnow.com> wrote:
In the ideal case, what you'd have:
    |---- Single virtio virtual interface
    |
VM ---- Host ==== Switch stack
             |
             |------- 4x 1Gbit interfaces bonded over LACP
The change: virtio instead of "1 Gbit"
You can't get blood from a stone, that is, you can't manufacture bandwidth that isn't there. If you need more than gigabit speed, you need something like 10Gbit. Realize that usually, we're talking about a system created to run more than one VM. If just one, you'll do better with dedicated hardware. If more than one VM, then there's sharing going on, though you might be able to use QoS (either in oVirt or outside). Even so, if just one VM on 10Gbit, you won't necessarily get full 10Gbit out of virtio. But at the same time bonding should help in the case of multiple VMs.
Jumbo frames may help in some workloads and give ~5% boost or so. Y.
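If anyone wants to try that, a minimal sketch of bumping and verifying the MTU (the device name is an assumption, and every hop - host bond, bridge, switch ports, guest VIF - has to be raised consistently):

    #!/usr/bin/env python3
    # Sketch: set a 9000-byte MTU on one device and read it back.
    # "bond0" is assumed; repeat for the bridge and the guest interface.
    import subprocess
    from pathlib import Path

    DEV = "bond0"
    subprocess.run(["ip", "link", "set", "dev", DEV, "mtu", "9000"], check=True)
    print(DEV, "mtu:", Path(f"/sys/class/net/{DEV}/mtu").read_text().strip())

A quick end-to-end check afterwards is "ping -M do -s 8972 <peer>", which fails if any hop is still at 1500.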
Now, back to the suggestion at hand. Multiple virtual NICs. If the logical networks presented via oVirt are such that each (however many) logical network has its own "pipe", then defining a vNIC on each of those networks gets you the same sort of "gain" with respect to bonding. That is, no magic bandwidth increase for a particular connection, but more pipes available for multiple connections (essentially what you'd expect).
Obviously up to you how you want to do this. I think you might do better to consider a better underlying infrastructure to oVirt rather than trying to bond vNICs. Pretty sure I'm right about that. Would think the idea of bonding at the VM level might be best for simulating something rather than something you do because it's right/best.
On 05/14/2018 03:03 PM, Doug Ingham wrote:
On 14 May 2018 at 15:35, Juan Pablo <pablo.localhost@gmail.com> wrote:
So you have LACP on your host, and you want LACP also on your VM... somehow that doesn't sound correct. There are several LACP modes; which one are you using on the host?
Correct!
    |---- Single 1Gbit virtual interface
    |
VM ---- Host ==== Switch stack
             |
             |------- 4x 1Gbit interfaces bonded over LACP
The traffic for all of the VMs is distributed across the host's 4 bonded links, however each VM is limited to the 1Gbit of its own virtual interface. In the case of my proxy, all web traffic is routed through it, so its single Gbit interface has become a bottleneck.
To increase the total bandwidth available to my VM, I presume I will need to add multiple Gbit VIFs & bridge them with a bonding mode. Balance-alb (mode 6) is one option, however I'd prefer to use LACP (mode 4) if possible.
2018-05-14 16:20 GMT-03:00 Doug Ingham:
On 14 May 2018 at 15:03, Vinícius Ferrão wrote:
You should use better hashing algorithms for LACP.
Take a look at this explanation: https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en
In general only L2 hashing is made, you can achieve better throughput with L3 and multiple IPs, or with L4 (ports).
Your switch should support those features too, if you’re using one.
V.
The problem isn't the LACP connection between the host & the switch, but setting up LACP between the VM & the host. For reasons of stability, my 4.1 cluster's switch type is currently "Linux Bridge", not "OVS". Ergo my question, is LACP on the VM possible with that, or will I have to use ALB?
Regards, Doug
On 14 May 2018, at 15:16, Doug Ingham wrote:
Hi All, My hosts have all of their interfaces bonded via LACP to maximise throughput, however the VMs are still limited to Gbit virtual interfaces. Is there a way to configure my VMs to take full advantage of the bonded physical interfaces?
One way might be adding several VIFs to each VM & using ALB bonding, however I'd rather use LACP if possible...
Cheers, -- Doug
-- Doug
-- Doug

On 14 May 2018 at 16:25, Christopher Cox <ccox@endlessnow.com> wrote:
In the ideal case, what you'd have:
    |---- Single virtio virtual interface
    |
VM ---- Host ==== Switch stack
             |
             |------- 4x 1Gbit interfaces bonded over LACP
The change: virtio instead of "1 Gbit"
It's using virtio. My confusion came from the VIF's speed being reported by the Engine & within the guest.
You can't get blood from a stone, that is, you can't manufacture bandwidth that isn't there. If you need more than gigabit speed, you need something like 10Gbit. Realize that usually, we're talking about a system created to run more than one VM. If just one, you'll do better with dedicated hardware. If more than one VM, then there's sharing going on, though you might be able to use QoS (either in oVirt or outside). Even so, if just one VM on 10Gbit, you won't necessarily get full 10Gbit out of virtio. But at the same time bonding should help in the case of multiple VMs.
Now, back to the suggestion at hand. Multiple virtual NICs. If the logical networks presented via oVirt are such that each (however many) logical network has its own "pipe", then defining a vNIC on each of those networks gets you the same sort of "gain" with respect to bonding. That is, no magic bandwidth increase for a particular connection, but more pipes available for multiple connections (essentially what you'd expect).
Obviously up to you how you want to do this. I think you might do better to consider a better underlying infrastructure to oVirt rather than trying to bond vNICs. Pretty sure I'm right about that. Would think the idea of bonding at the VM level might be best for simulating something rather than something you do because it's right/best.
Oh, I'm certain you're right about that! My current budget's focused on beefing up the resilience of our storage layer, however the network is next on my list. For the moment though, it's a case of working with what I've got.

Bandwidth has only really become an issue recently, since we've started streaming live events, and that's a simple case of (relatively) low bandwidth connections. The only place that might get much benefit from single 10Gbit links would be on our distributed storage layer, although with 10 nodes, each with 4x1Gbit LAGGs, even that's holding up quite well.

Let's see how the tests go tomorrow...

On 05/14/2018 03:03 PM, Doug Ingham wrote:
On 14 May 2018 at 15:35, Juan Pablo <pablo.localhost@gmail.com> wrote:
So you have LACP on your host, and you want LACP also on your VM... somehow that doesn't sound correct. There are several LACP modes; which one are you using on the host?
Correct!
    |---- Single 1Gbit virtual interface
    |
VM ---- Host ==== Switch stack
             |
             |------- 4x 1Gbit interfaces bonded over LACP
The traffic for all of the VMs is distributed across the host's 4 bonded links, however each VM is limited to the 1Gbit of its own virtual interface. In the case of my proxy, all web traffic is routed through it, so its single Gbit interface has become a bottleneck.
To increase the total bandwidth available to my VM, I presume I will need to add multiple Gbit VIFs & bridge them with a bonding mode. Balance-alb (mode 6) is one option, however I'd prefer to use LACP (mode 4) if possible.
2018-05-14 16:20 GMT-03:00 Doug Ingham:
On 14 May 2018 at 15:03, Vinícius Ferrão wrote:
You should use better hashing algorithms for LACP.
Take a look at this explanation: https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en
In general only L2 hashing is made, you can achieve better throughput with L3 and multiple IPs, or with L4 (ports).
Your switch should support those features too, if you’re using one.
V.
The problem isn't the LACP connection between the host & the switch, but setting up LACP between the VM & the host. For reasons of stability, my 4.1 cluster's switch type is currently "Linux Bridge", not "OVS". Ergo my question, is LACP on the VM possible with that, or will I have to use ALB?
Regards, Doug
On 14 May 2018, at 15:16, Doug Ingham wrote:
Hi All, My hosts have all of their interfaces bonded via LACP to maximise throughput, however the VMs are still limited to Gbit virtual interfaces. Is there a way to configure my VMs to take full advantage of the bonded physical interfaces?
One way might be adding several VIFs to each VM & using ALB bonding, however I'd rather use LACP if possible...
Cheers, -- Doug
-- Doug
-- Doug
-- Doug
participants (6)
- Chris Adams
- Christopher Cox
- Doug Ingham
- Juan Pablo
- Vinícius Ferrão
- Yaniv Kaul