[ovirt-users] 3.6.1 HE install on CentOS 7.2 resulted in unsync'd network
Yedidyah Bar David
didi at redhat.com
Sun Dec 20 10:13:15 UTC 2015
On Sat, Dec 19, 2015 at 12:53 PM, Gianluca Cecchi
<gianluca.cecchi at gmail.com> wrote:
> On Sat, Dec 19, 2015 at 1:08 AM, John Florian <jflorian at doubledog.org>
> wrote:
>>
>> I'm trying to get a 3.6.1 HE setup going where I have 4 VLANs (VIDs
>> 101-104) for storage networks, 1 VLAN (VID 100) for ovirtmgmt, and 1 more
>> (VID 1) for everything else. Because I know of no way to manipulate the
>> network configuration from the management GUI once the HE is running
>> with only a single host, I made the OS configuration as close as
>> possible to what I'd want when done. This looks like:
>
>
> Why do you think this pre-work is necessary? I configured (in 3.6.0) an
> environment with HE on a single host too, and I only preconfigured my
> bond1 in 802.3ad mode with the interfaces I planned to use for ovirtmgmt;
> I left the other interfaces unconfigured, so that none of them is managed
> by NetworkManager.
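>
> For illustration, a minimal sketch of such a manual pre-configuration
> (em3/em4 are just placeholder slave names, not my real ones):
>
>     # /etc/sysconfig/network-scripts/ifcfg-bond1
>     DEVICE=bond1
>     TYPE=Bond
>     BONDING_MASTER=yes
>     BONDING_OPTS="mode=4 lacp_rate=1"
>     BOOTPROTO=none
>     ONBOOT=yes
>     NM_CONTROLLED=no
>
>     # /etc/sysconfig/network-scripts/ifcfg-em3 (placeholder name;
>     # same content, with DEVICE adjusted, for em4)
>     DEVICE=em3
>     MASTER=bond1
>     SLAVE=yes
>     BOOTPROTO=none
>     ONBOOT=yes
>     NM_CONTROLLED=no
>
> The IP configuration can stay off the bond itself: setup puts it on the
> ovirtmgmt bridge it creates on top of the bond.
>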
> During the "hosted-engine --deploy" setup I got this prompt:
>
> --== NETWORK CONFIGURATION ==--
>
> Please indicate a nic to set ovirtmgmt bridge on: (em1, bond1, em2) [em1]: bond1
> iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:
> Please indicate a pingable gateway IP address [10.4.168.254]:
>
> and then in the preview of the configuration to be applied:
>
> --== CONFIGURATION PREVIEW ==--
>
> Bridge interface : bond1
> Engine FQDN : ractorshe.mydomain.local
> Bridge name : ovirtmgmt
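>
> As an aside, the same answers can also be given non-interactively with
> "hosted-engine --deploy --config-append=answers.conf"; the key names
> below are from memory, so double-check them against the answers.conf
> that setup writes under /etc/ovirt-hosted-engine/:
>
>     [environment:default]
>     # key names from memory - verify against your generated answers.conf
>     OVEHOSTED_NETWORK/bridgeIf=str:bond1
>     OVEHOSTED_NETWORK/bridgeName=str:ovirtmgmt
>     OVEHOSTED_NETWORK/gateway=str:10.4.168.254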
>
> After setup I configured my VLAN-based networks for my VMs from the GUI
> in the usual way, so now I have this bond0, created by the oVirt GUI, on
> the other two interfaces (em1 and em2):
>
> [root@ractor ~]# cat /proc/net/bonding/bond0
> Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
>
> Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> Transmit Hash Policy: layer2 (0)
> MII Status: up
> MII Polling Interval (ms): 100
> Up Delay (ms): 0
> Down Delay (ms): 0
>
> 802.3ad info
> LACP rate: fast
> Min links: 0
> Aggregator selection policy (ad_select): stable
> Active Aggregator Info:
> Aggregator ID: 2
> Number of ports: 2
> Actor Key: 17
> Partner Key: 8
> Partner Mac Address: 00:01:02:03:04:0c
>
> Slave Interface: em1
> MII Status: up
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: 00:25:64:ff:0b:f0
> Aggregator ID: 2
> Slave queue ID: 0
>
> Slave Interface: em2
> MII Status: up
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: 00:25:64:ff:0b:f2
> Aggregator ID: 2
> Slave queue ID: 0
>
> And then the "ip a" command returns:
>
> 9: bond0.65@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vlan65 state UP
>     link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
> 10: vlan65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>     link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
>
> with
> [root@ractor ~]# brctl show
> bridge name     bridge id           STP enabled     interfaces
> ;vdsmdummy;     8000.000000000000   no
> ovirtmgmt       8000.002564ff0bf4   no              bond1
>                                                     vnet0
> vlan65          8000.002564ff0bf0   no              bond0.65
>                                                     vnet1
>                                                     vnet2
>
> vnet1 and vnet2 being the virtual network interfaces of my two running VMs.
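>
> If you ever need to map a vnetX device back to its VM, a read-only
> virsh should work on a vdsm host without extra credentials, e.g. (with
> a hypothetical VM name):
>
>     # replace myvm1 with the VM's actual name
>     virsh -r domiflist myvm1
>
> which lists each interface's vnet device, the bridge it is plugged
> into, and its MAC address.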
>
> The only note I can offer is that, by default, when you set up a network
> in the oVirt GUI with mode=4 (802.3ad), it configures the bond with
> "lacp_rate=0" (slow), which I think is a poor default, judging from the
> many articles I have read (but I'm not a network guru at all).
> So I chose custom mode in the GUI instead and specified "mode=4
> lacp_rate=1" in the options, and this is reflected in the bond0 output
> above.
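>
> A quick way to check which rate is actually in effect, without reading
> the whole /proc/net/bonding/bond0 output:
>
>     [root@ractor ~]# cat /sys/class/net/bond0/bonding/lacp_rate
>     fast 1
>
> where "fast 1" corresponds to lacp_rate=1 and "slow 0" to the default.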
>
> Can we set lacp_rate=1 as a default option for mode=4 in oVirt?
No idea, adding Dan. I guess you can always open an RFE bz...
Dan - any specific reason for the current defaults?
--
Didi