[ovirt-users] Debugging warning messages about bonding mode 4

Darrell Budic budic at onholyground.com
Fri Oct 6 15:49:34 UTC 2017


That looks like the normal state for a LACP bond, but it does record some churn (bond renegotiations, I believe), so it probably bounced once or twice while coming up. Maybe a slow switch, or maybe a switch relying on dynamic negotiation instead of a statically configured LAG, which takes longer to establish.
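
If you want to confirm it was just a bounce at link-up, the churn counters are right in the same file (bond0 here; substitute your bond name):

# grep Churn /proc/net/bonding/bond0

Nonzero "Churned Count" values that stay flat after boot usually just mean the initial negotiation took a couple of tries.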

For the ones with a down link, and this one too, you could ask the network guys if they statically configured the bond, or if they could do so; that might make it quicker to bring up.
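
On the host side, independent of what the switch guys do, asking for fast LACPDUs may also shorten negotiation, assuming the switch honors it. In Setup Networks that would be a custom bonding string like:

mode=4 miimon=100 lacp_rate=fast

lacp_rate is a standard bonding driver option (only valid in mode 4); this is just a guess at shaving negotiation time, not a fix for the warning itself.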

I don’t think anything updates while the host is in maintenance; you could take it out and see what happens :) The bond is lower level though, so it should come up if it’s configured properly, and you should be able to see that on the host.
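
For what it’s worth, I believe the engine warning keys on the partner MAC the kernel reports, and you can read that straight from sysfs on the host (bond2 here, matching your output):

# cat /sys/class/net/bond2/bonding/ad_partner_mac

If that shows 00:00:00:00:00:00, the switch end never completed LACP negotiation on that bond.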

  -Darrell

For comparison, a bond on one of mine:

# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:0f:53:08:4b:ac
Active Aggregator Info:
	Aggregator ID: 1
	Number of ports: 2
	Actor Key: 13
	Partner Key: 14
	Partner Mac Address: 64:64:9b:5e:9b:00

Slave Interface: p1p1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0f:53:08:4b:ac
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 00:0f:53:08:4b:ac
    port key: 13
    port priority: 255
    port number: 1
    port state: 61
details partner lacp pdu:
    system priority: 127
    system mac address: 64:64:9b:5e:9b:00
    oper key: 14
    port priority: 127
    port number: 8
    port state: 63

Slave Interface: p1p2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0f:53:08:4b:ad
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 00:0f:53:08:4b:ac
    port key: 13
    port priority: 255
    port number: 2
    port state: 61
details partner lacp pdu:
    system priority: 127
    system mac address: 64:64:9b:5e:9b:00
    oper key: 14
    port priority: 127
    port number: 7
    port state: 63
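
For reference, the "port state" values above are the 802.3ad state bitmask. A quick shell decode (61 is what you want to see on both sides; 63 just adds the short_timeout bit, i.e. the peer requested fast LACPDUs):

# state=61; for bit in 1:lacp_activity 2:short_timeout 4:aggregating 8:in_sync 16:collecting 32:distributing 64:defaulted 128:expired; do [ $((state & ${bit%%:*})) -ne 0 ] && echo ${bit##*:}; done
lacp_activity
aggregating
in_sync
collecting
distributing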


> From: Gianluca Cecchi <gianluca.cecchi at gmail.com>
> Subject: [ovirt-users] Debugging warning messages about bonding mode 4
> Date: October 6, 2017 at 6:28:16 AM CDT
> To: users
> 
> Hello,
> on a 2-node cluster running 4.1.6 I have this situation.
> Every node has 3 bonds, each composed of 2 network adapters and each of type mode=4
> (actually, in Setup Networks I have configured "custom" and then the value:
> "mode=4 miimon=100"
> )
> 
> At the moment only one of the servers has access to the FC storage, while the other is currently in maintenance.
> 
> On 2 of the 3 bonds of the active server I get an exclamation point in the "Network Interfaces" subtab, with this mouseover popup:
> 
> Bond is in link aggregation mode (mode 4), but no partner mac has been reported for it
> 
> What is the exact meaning of this message? Do I have to care about it (I think so...)?
> What should I report to the network guys?
> E.g., the status of one of these two warning bonds is:
> 
> # cat /proc/net/bonding/bond2
> Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
> 
> Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> Transmit Hash Policy: layer2 (0)
> MII Status: up
> MII Polling Interval (ms): 100
> Up Delay (ms): 0
> Down Delay (ms): 0
> 
> 802.3ad info
> LACP rate: slow
> Min links: 0
> Aggregator selection policy (ad_select): stable
> System priority: 65535
> System MAC address: 48:df:37:0c:7f:5a
> Active Aggregator Info:
>         Aggregator ID: 5
>         Number of ports: 2
>         Actor Key: 9
>         Partner Key: 6
>         Partner Mac Address: b8:38:61:9c:75:80
> 
> Slave Interface: ens2f2
> MII Status: up
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 2
> Permanent HW addr: 48:df:37:0c:7f:5a
> Slave queue ID: 0
> Aggregator ID: 5
> Actor Churn State: none
> Partner Churn State: none
> Actor Churned Count: 2
> Partner Churned Count: 3
> details actor lacp pdu:
>     system priority: 65535
>     system mac address: 48:df:37:0c:7f:5a
>     port key: 9
>     port priority: 255
>     port number: 1
>     port state: 61
> details partner lacp pdu:
>     system priority: 32768
>     system mac address: b8:38:61:9c:75:80
>     oper key: 6
>     port priority: 32768
>     port number: 293
>     port state: 61
> 
> Slave Interface: ens2f3
> MII Status: up
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 2
> Permanent HW addr: 48:df:37:0c:7f:5b
> Slave queue ID: 0
> Aggregator ID: 5
> Actor Churn State: none
> Partner Churn State: none
> Actor Churned Count: 0
> Partner Churned Count: 3
> details actor lacp pdu:
>     system priority: 65535
>     system mac address: 48:df:37:0c:7f:5a
>     port key: 9
>     port priority: 255
>     port number: 2
>     port state: 61
> details partner lacp pdu:
>     system priority: 32768
>     system mac address: b8:38:61:9c:75:80
>     oper key: 6
>     port priority: 32768
>     port number: 549
>     port state: 61
> 
> Also, the other node (currently in maintenance) shows one of the 2 interfaces of bond2 (ens2f2) as down (red arrow), but on the host:
> 
> # ip link show ens2f2
> 6: ens2f2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2 state UP mode DEFAULT qlen 1000
>     link/ether 48:df:37:0c:85:4e brd ff:ff:ff:ff:ff:ff
> # 
> 
> Does this depend on the host being in maintenance?
> Perhaps when a host is in maintenance, its warnings are not checked/updated again by the engine?
> 
> Thanks in advance,
> Gianluca
> 
>  
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
