[Users] Network issues - Bonding

Assaf Muller amuller at redhat.com
Sun Jan 5 08:39:45 UTC 2014


If you ifdown bond4 and then ifup it, does the bonding mode properly update
to mode 1 (active-backup)? If not, it sounds like an initscripts or bonding module bug.
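
Concretely (assuming the bond is still named bond4 on the host, as in the
output quoted below), something along these lines should show whether the
mode takes effect after bouncing the interface:

ifdown bond4
ifup bond4
grep "Bonding Mode" /proc/net/bonding/bond4   # mode 1 should read: fault-tolerance (active-backup)
cat /sys/class/net/bond4/bonding/mode         # mode 1 should read: active-backup 1

If it still reports round-robin afterwards, the options in the ifcfg file
are most likely never reaching the driver.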


Assaf Muller, Cloud Networking Engineer 
Red Hat 

----- Original Message -----
From: "Dan Ferris" <dferris at prometheusresearch.com>
To: "users" <users at ovirt.org>
Sent: Saturday, January 4, 2014 5:31:21 AM
Subject: [Users] Network issues - Bonding

Hello All,

A little while ago I wrote an email about some network issues I was having.

I found the problem...

On the VM host, I had a bond set up between two network interfaces.  The
bond mode was set to mode 1 (active-backup).
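
For reference, a minimal mode 1 ifcfg-bond4 would look roughly like the
sketch below (illustrative only; addressing and the oVirt bridge options
are omitted, and miimon=100 is just a common default):

DEVICE=bond4
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=1 miimon=100"

with each slave (em2, em3) pointing back at it via MASTER=bond4 and
SLAVE=yes in its own ifcfg file.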

However when I look at the bond on the box, I get this:

[root@node02 bonding]# cat bond4
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d4:ae:52:6d:c8:cc
Slave queue ID: 0

Slave Interface: em3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d4:ae:52:6d:c8:ce
Slave queue ID: 0

Somehow, the OS is not setting the bonding mode correctly.  I verified that
it was set to mode 1 in /etc/sysconfig/network-scripts/ifcfg-bond4.
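
As a cross-check (commands are a sketch, assuming the stock bonding sysfs
layout), the mode and link-monitoring interval the driver is actually
running with can be read straight from sysfs:

cat /sys/class/net/bond4/bonding/mode     # mode 1 should read: active-backup 1
cat /sys/class/net/bond4/bonding/miimon   # 0 means MII link monitoring is disabled

For the bond shown above these would presumably read "balance-rr 0" and
"0". Note that, per the bonding module documentation, the mode can only be
changed while the bond interface is down.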

When I take the bond away, the host network works perfectly on both of
the formerly bonded interfaces.

So again, if anyone has any ideas, I'm open to suggestions.

Thanks,

Dan
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


