3.6.1 HE install on CentOS 7.2 resulted in unsync'd network

I'm trying to get a 3.6.1 HE setup going where I have 4 VLANs (VIDs 101-104) for storage networks, 1 VLAN (VID 100) for ovirtmgmt and 1 more (VID 1) for everything else. Because I know of no way to manipulate the network configuration from the management GUI once the HE is running and with only a single Host, I made the OS configuration as close as possible to what I'd want when done. This looks like:

[root@orthosie ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovirtmgmt state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
3: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
4: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
5: em3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
6: em4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
8: bond0.1@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 172.16.7.8/24 brd 172.16.7.255 scope global bond0.1
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
       valid_lft forever preferred_lft forever
9: bond0.101@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.203/24 brd 192.168.101.255 scope global bond0.101
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
       valid_lft forever preferred_lft forever
10: bond0.102@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.203/24 brd 192.168.102.255 scope global bond0.102
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
       valid_lft forever preferred_lft forever
11: bond0.103@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.103.203/24 brd 192.168.103.255 scope global bond0.103
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
       valid_lft forever preferred_lft forever
12: bond0.104@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.104.203/24 brd 192.168.104.255 scope global bond0.104
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe3c:da02/64 scope link
       valid_lft forever preferred_lft forever
13: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 78:2b:cb:3c:da:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.102/24 brd 192.168.100.255 scope global ovirtmgmt
       valid_lft forever preferred_lft forever

The hosted-engine deploy script got stuck near the end when it wanted the HA broker to take over. It said the ovirtmgmt network was unavailable on the Host and suggested trying to activate it within the GUI. Though I had my bonding and bridging all configured prior to any HE deployment attempt (as shown above), the GUI didn't see it that way. It knew of the bond, and the 4 IFs of course, but it showed all 4 IFs as down, and the required ovirtmgmt network sat off on the right side, effectively not yet associated with the physical devices. I dragged the ovirtmgmt net over to the left to associate it with the 4 IFs and pressed Save. The GUI now shows all 4 IFs up with ovirtmgmt assigned, but it is not in sync -- specifically, the netmask property on the host is "255.255.255.0" while on the DC it's "24". They're saying the same thing, just in different ways.

Since I only have the one Host, how can I sync this?

--
John Florian
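The message does not include the ifcfg files behind the layout shown above, so the following is only a minimal sketch of one way it could be expressed on CentOS 7, reconstructed from the "ip a" output. The bonding mode is assumed from the mode 4 discussion later in the thread, NM_CONTROLLED=no is just one common way to keep NetworkManager away from these devices, and only one bond slave and one storage VLAN are shown (the others are analogous).

    # /etc/sysconfig/network-scripts/ifcfg-bond0   (illustrative, not the poster's actual file)
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=4"        # assumed; the bonding mode is not stated in this message
    BRIDGE=ovirtmgmt             # enslaves the bond to the ovirtmgmt bridge
    ONBOOT=yes
    NM_CONTROLLED=no

    # /etc/sysconfig/network-scripts/ifcfg-em1     (one bond slave; em2-em4 analogous)
    DEVICE=em1
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes
    NM_CONTROLLED=no

    # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
    DEVICE=ovirtmgmt
    TYPE=Bridge
    BOOTPROTO=none
    IPADDR=192.168.100.102
    PREFIX=24
    ONBOOT=yes
    NM_CONTROLLED=no

    # /etc/sysconfig/network-scripts/ifcfg-bond0.101   (storage VLAN; .102-.104 analogous)
    DEVICE=bond0.101
    VLAN=yes
    BOOTPROTO=none
    IPADDR=192.168.101.203
    PREFIX=24
    ONBOOT=yes
    NM_CONTROLLED=no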

On Sat, Dec 19, 2015 at 1:08 AM, John Florian <jflorian@doubledog.org> wrote:
I'm trying to get a 3.6.1 HE setup going where I have 4 VLANs (VIDs 101-104) for storage networks, 1 VLAN (VID 100) for ovirtmgmt and 1 more (VID 1) for everything else. Because I know of no way to manipulate the network configuration from the management GUI once the HE is running and with only a single Host, I made the OS configuration as close as possible to what I'd want when done. This looks like:
Why do you think this pre-work is necessary? I configured (in 3.6.0) an environment with HE, also on a single host, and I only preconfigured my bond1 in 802.3ad mode with the interfaces I planned to use for ovirtmgmt; I left the other interfaces unconfigured so that none of them are managed by NetworkManager. During the "hosted-engine --deploy" setup I got this input:

          --== NETWORK CONFIGURATION ==--

          Please indicate a nic to set ovirtmgmt bridge on: (em1, bond1, em2) [em1]: bond1
          iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:
          Please indicate a pingable gateway IP address [10.4.168.254]:

and then, on the preview of the configuration to apply:

          --== CONFIGURATION PREVIEW ==--

          Bridge interface          : bond1
          Engine FQDN               : ractorshe.mydomain.local
          Bridge name               : ovirtmgmt

After setup I configured my VLAN-based networks for my VMs from the GUI itself in the usual way, so that now I have this bond0, created by the oVirt GUI, on the other two interfaces (em1 and em2):

[root@ractor ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 2
        Number of ports: 2
        Actor Key: 17
        Partner Key: 8
        Partner Mac Address: 00:01:02:03:04:0c

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:64:ff:0b:f0
Aggregator ID: 2
Slave queue ID: 0

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:64:ff:0b:f2
Aggregator ID: 2
Slave queue ID: 0

And then the "ip a" command returns:

9: bond0.65@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vlan65 state UP
    link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff
10: vlan65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:25:64:ff:0b:f0 brd ff:ff:ff:ff:ff:ff

with

[root@ractor ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
;vdsmdummy;     8000.000000000000       no
ovirtmgmt       8000.002564ff0bf4       no              bond1
                                                        vnet0
vlan65          8000.002564ff0bf0       no              bond0.65
                                                        vnet1
                                                        vnet2

vnet1 and vnet2 are the virtual network interfaces of my two running VMs.

The only note I can add is that, by default, when you set up a bond in the oVirt GUI with mode=4 (802.3ad), it is configured with "lacp_rate=0", i.e. slow, which I think is a bad default, judging from many articles I have read (but I'm not a network guru at all). So I chose custom mode in the GUI and specified "mode=4 lacp_rate=1" in the options, and this is reflected in the bond0 output above.

Can we set lacp_rate=1 as a default option for mode=4 in oVirt?

HIH,
Gianluca

On Sat, Dec 19, 2015 at 12:53 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
[...]
The only note I can add is that, by default, when you set up a bond in the oVirt GUI with mode=4 (802.3ad), it is configured with "lacp_rate=0", i.e. slow, which I think is a bad default, judging from many articles I have read (but I'm not a network guru at all). So I chose custom mode in the GUI and specified "mode=4 lacp_rate=1" in the options, and this is reflected in the bond0 output above.
Can we set lacp_rate=1 as a default option for mode=4 in oVirt?
No idea, adding Dan. I guess you can always open an RFE bz... Dan - any specific reason for the current defaults?

--
Didi

On Sun, Dec 20, 2015 at 12:13:15PM +0200, Yedidyah Bar David wrote:
On Sat, Dec 19, 2015 at 12:53 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
[...]
Can we set lacp_rate=1 as a default option for mode=4 in oVirt?
No idea, adding Dan. I guess you can always open an RFE bz... Dan - any specific reason for the current defaults?
lacp_rate=0 ('slow') is the default of the bonding module in mode=4, and we do not change that (even though we could). Please open an RFE, citing the articles that recommend the use of the faster rate. Until then, configure the Engine's custom bond options to your liking.
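For anyone who wants to confirm what a given bond actually ended up with after applying the custom option in the engine, a quick check on the host (assuming the bond is named bond0, as in Gianluca's output):

    # Rate configured for the 802.3ad bond, as reported by the bonding driver
    grep "LACP rate" /proc/net/bonding/bond0

    # The same setting via sysfs; prints "slow 0" or "fast 1"
    cat /sys/class/net/bond0/bonding/lacp_rate

The custom options string used in the GUI above ("mode=4 lacp_rate=1") is what flips this from slow to fast.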

On 12/19/2015 05:53 AM, Gianluca Cecchi wrote:
On Sat, Dec 19, 2015 at 1:08 AM, John Florian <jflorian@doubledog.org> wrote:
[...]
Why do you think this pre-work is necessary?
Because my storage is iSCSI and I need the VLAN configuration in place for the Host to access it on behalf of the HE. Otherwise, yes, I agree it would be easier to let the hosted-engine script deal with the setup. I've done a workable setup before by letting the script do everything, but the mode 4 bonding only gave me half the possible performance because, in effect, one NIC on the NAS did all the transmitting while the other NIC did all the receiving. So I really need all of the storage network setup in place prior to starting the HE deployment.

It seems like it should be trivial to convince the engine that the two netmasks are indeed equivalent. I tried changing the '"prefix": "24"' setting in /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt to '"netmask": "255.255.255.0"' and running /usr/share/vdsm/vdsm-restore-net-config, but that didn't seem to change anything WRT the network being out of sync.
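For reference, the sequence described above written out as commands. The file path, both key names and the restore script come from this message; the sed expression is only an illustration of the attempted edit, since the exact JSON contents of the persisted file are not shown in the thread:

    # Inspect what vdsm has persisted for the ovirtmgmt network
    cat /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt

    # The attempted change: swap the CIDR prefix for the equivalent dotted-quad
    # netmask (illustrative only; back the file up first)
    cp /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt{,.bak}
    sed -i 's/"prefix": "24"/"netmask": "255.255.255.0"/' \
        /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt

    # Re-apply the persisted network configuration
    /usr/share/vdsm/vdsm-restore-net-config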
--
John Florian
participants (4)
- Dan Kenigsberg
- Gianluca Cecchi
- John Florian
- Yedidyah Bar David