[Users] Network issues - Bonding

Hello All,

A little bit ago I wrote an email about network issues I was having. I found the problem...

On the VM host, I had a bond set up between two network interfaces. The bond mode was set to mode 1 (active-backup). However, when I look at the bond on the box, I get this:

    [root@node02 bonding]# cat bond4
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

    Bonding Mode: load balancing (round-robin)
    MII Status: up
    MII Polling Interval (ms): 0
    Up Delay (ms): 0
    Down Delay (ms): 0

    Slave Interface: em2
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: d4:ae:52:6d:c8:cc
    Slave queue ID: 0

    Slave Interface: em3
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: d4:ae:52:6d:c8:ce
    Slave queue ID: 0

Somehow, the OS is not setting the bonding mode right. I verified that it was set to mode 1 in /etc/sysconfig/network-scripts/ifcfg-bond4.

When I take the bond away, the host network works perfectly on both of the formerly bonded interfaces.

So again, if anyone has any ideas, I'm open to suggestions.

Thanks,
Dan
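For reference, a minimal active-backup (mode 1) setup under initscripts might look like the sketch below. The bond4/em2/em3 names are taken from the output above; everything else (the miimon value, BOOTPROTO) is an assumption, not the poster's actual configuration:

    # /etc/sysconfig/network-scripts/ifcfg-bond4 -- sketch only
    DEVICE=bond4
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=1 miimon=100"   # mode=1 is active-backup
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-em2 -- same pattern for em3
    DEVICE=em2
    MASTER=bond4
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

Worth noting: the output above shows "MII Polling Interval (ms): 0" alongside round-robin mode, and both of those are the bonding driver's defaults, so it looks as if the options from the ifcfg file were never handed to the driver at all.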

If you ifdown bond4 and then ifup it, does the bond mode properly update to mode 1? If not, it sounds like an initscripts or bonding module bug.

Assaf Muller, Cloud Networking Engineer
Red Hat
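A concrete version of that test might be the following sketch (bond4 is the name from the original mail; the grep is just one way to read the mode back):

    ifdown bond4 && ifup bond4
    grep "Bonding Mode" /proc/net/bonding/bond4
    # for mode 1 the driver reports: Bonding Mode: fault-tolerance (active-backup)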

On Sun, Jan 05, 2014 at 03:39:45AM -0500, Assaf Muller wrote:
If you ifdown bond4 then ifup it, does the bond mode properly update to bond mode 1? If not, it sounds like an initscripts or bonding module bug.
In particular, if you run

    ifconfig bond0 down
    echo 1 > /sys/class/net/bond0/bonding/mode
    ifconfig bond0 up

then the content of /sys/class/net/bond0/bonding/mode should change. If it does not, it is most probably a kernel bug.
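One caveat to that test: the bonding driver refuses a mode write while the interface is up, and it can also refuse the write while slaves are still attached, so a rejected echo does not by itself prove a kernel bug. A fuller sketch of the same test with the slaves detached first (bond0 follows the generic example above; em2/em3 are the slave names from the original mail; the +/- syntax is the driver's sysfs convention for enslaving and releasing):

    ifconfig bond0 down
    echo -em2 > /sys/class/net/bond0/bonding/slaves   # detach both slaves
    echo -em3 > /sys/class/net/bond0/bonding/slaves
    echo 1 > /sys/class/net/bond0/bonding/mode        # 1 == active-backup
    cat /sys/class/net/bond0/bonding/mode             # expect: active-backup 1
    echo +em2 > /sys/class/net/bond0/bonding/slaves   # re-attach
    echo +em3 > /sys/class/net/bond0/bonding/slaves
    ifconfig bond0 up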

Hi,

just out of curiosity (I'm also looking at implementing bonding on our ComputeNodes): which OS version are you running?

Kind regards

Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6, 32339 Espelkamp
https://www.mittwald.de

----- Original Message -----
From: "Dan Ferris" <dferris@prometheusresearch.com> To: users@ovirt.org Sent: Monday, January 6, 2014 3:52:45 PM Subject: Re: [Users] Network issues - Bonding
It's FC 19 (Fedora 19) with all of the latest updates.
On 01/06/2014 05:56 AM, Sven Kieske wrote:
Hi,
just out of curiosity (I'm also looking at implementing bonding on our ComputeNodes):
Which OS version are you running?
I think I once ran into this, and a restart of the computer solved it. I tried to reproduce it again and couldn't (so no bug was filed).
participants (5)
- Antoni Segura Puimedon
- Assaf Muller
- Dan Ferris
- Dan Kenigsberg
- Sven Kieske