Re: [Users] Making bonds in oVirt

----- Original Message -----
Hi All,
Hi Joop,
I'm slowly progressing with our test environment; currently I'm testing my 10GbE NICs and switches and hit a snag. I watched the webcast about setting up a non-VM network, which included creating a bond using the web interface, but I can't get it to work correctly. I have two problems:
1. My ovirtmgmt network always reverts to DHCP in Hosts/Network Interfaces/Setup Host Networks.
2. Bonding p3p1 and p3p2 into bond0 with mode 4 and assigning an IP address doesn't work either.
It might be a good idea to try and isolate the two problems:
1. Even without bonding the BOOTPROTO won't stick? Do you see any errors?
No, the following is the relevant part of vdsm.log:
<snip>
[ifcfg-em1]
DEVICE=em1
ONBOOT=yes
BOOTPROTO=none
HWADDR=d8:d3:85:8f:11:dc
BRIDGE=ovirtmgmt
NM_CONTROLLED=no

[ifcfg-ovirtmgmt]
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
IPADDR=192.168.216.151
NETMASK=255.255.255.0
GATEWAY=192.168.216.254
DELAY=0
NM_CONTROLLED=no
So it seems the BOOTPROTO data in this case is what you wanted: a static config. And when you do the same over an existing bond, it doesn't save the data?
2. If you have 3 NICs (or more), try to bond the 2 that aren't connected to the management network and see if that works.
Already doing that.
These are the configs before bonding.
[ifcfg-p3p1]
DEVICE=p3p1
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:1b:21:bb:41:21

[ifcfg-p3p2]
DEVICE=p3p2
ONBOOT=yes
BOOTPROTO=none
HWADDR=00:1b:21:bb:41:20
There is currently no ifcfg-bond0, but it will appear when I go to the web interface and configure a bond between p3p1 and p3p2. Doing that right now...
<snip>
[ifcfg-bond4]
DEVICE=bond4
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS=mode=4
NM_CONTROLLED=no
output from dmesg:
[ 3868.457852] bonding: bond4: Adding slave p3p1.
[ 3868.479455] 8021q: adding VLAN 0 to HW filter on device p3p1
[ 3868.479842] bonding: bond4: enslaving p3p1 as an active interface with an up link.
[ 3868.573557] bonding: bond4: Adding slave p3p2.
[ 3868.594368] 8021q: adding VLAN 0 to HW filter on device p3p2
[ 3868.594790] bonding: bond4: enslaving p3p2 as an active interface with an up link.
[ 3868.655028] bonding: unable to update mode of bond4 because it has slaves.
[ 3868.658528] 8021q: adding VLAN 0 to HW filter on device bond4
[ 3869.994124] ixgbe 0000:20:00.0: p3p1: detected SFP+: 0
[ 3870.055458] ixgbe 0000:20:00.1: p3p2: detected SFP+: 0
[ 3871.955610] ixgbe 0000:20:00.0: p3p1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[ 3872.170130] ixgbe 0000:20:00.1: p3p2: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[ 3879.167139] bond4: no IPv6 routers present
Notice the error about bonding and slaves; that same error is also in vdsm.log. At least, I suspect that is what vdsm/ifup is complaining about: not being able to change the bond mode.
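That specific kernel complaint has a fixed message format, so it is easy to pick out of a dmesg or vdsm.log capture. A minimal sketch (a hypothetical helper written for this thread, not vdsm code) that flags which bond devices had a mode change refused:

```python
import re

# Matches the kernel bonding driver's refusal to change mode while
# slaves are still attached, as seen in the dmesg output above.
BOND_MODE_ERR = re.compile(
    r"bonding: unable to update mode of (\S+) because it has slaves"
)

def find_mode_errors(lines):
    """Return bond device names whose mode change was refused."""
    return [m.group(1) for line in lines if (m := BOND_MODE_ERR.search(line))]

dmesg = [
    "[ 3868.573557] bonding: bond4: Adding slave p3p2.",
    "[ 3868.655028] bonding: unable to update mode of bond4 because it has slaves.",
]
print(find_mode_errors(dmesg))  # ['bond4']
```

The underlying kernel rule is that a bond's mode can only be changed while it has no slaves attached, which is why the enslave-then-set-mode ordering in the log fails.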
Yes, this does look problematic, although overall the bond was created fine and the config file does have the mode you set.
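For comparison, a hand-written static mode-4 bond in the Fedora initscripts convention would usually also carry MASTER/SLAVE entries in the slave configs and the IP data on the bond device itself. A sketch (interface names taken from this thread; the bond name, address, and miimon value are assumptions, not what vdsm generated above):

```
# ifcfg-bond0 (hypothetical example)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=4 miimon=100"
IPADDR=192.168.216.152
NETMASK=255.255.255.0
NM_CONTROLLED=no

# ifcfg-p3p1 (hypothetical example; ifcfg-p3p2 analogous)
DEVICE=p3p1
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no
```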
I ran yum update both on mgmt and storage; current package versions:
mgmt: [root@mgmt01 ~]# rpm -aq | grep ovirt | sort
ovirt-engine-3.1.0-2.fc17.noarch
ovirt-engine-backend-3.1.0-2.fc17.noarch
ovirt-engine-cli-3.2.0.5-1.fc17.noarch
ovirt-engine-config-3.1.0-2.fc17.noarch
ovirt-engine-dbscripts-3.1.0-2.fc17.noarch
ovirt-engine-genericapi-3.1.0-2.fc17.noarch
ovirt-engine-notification-service-3.1.0-2.fc17.noarch
ovirt-engine-restapi-3.1.0-2.fc17.noarch
ovirt-engine-sdk-3.2.0.2-1.fc17.noarch
ovirt-engine-setup-3.1.0-2.fc17.noarch
ovirt-engine-tools-common-3.1.0-2.fc17.noarch
ovirt-engine-userportal-3.1.0-2.fc17.noarch
ovirt-engine-webadmin-portal-3.1.0-2.fc17.noarch
ovirt-guest-agent-1.0.5-1.fc17.x86_64
ovirt-image-uploader-3.1.0-0.git9c42c8.fc17.noarch
ovirt-iso-uploader-3.1.0-0.git1841d9.fc17.noarch
ovirt-log-collector-3.1.0-0.git10d719.fc17.noarch
ovirt-node-2.5.5-0.fc17.noarch
ovirt-node-iso-2.5.5-0.1.fc17.noarch
ovirt-release-fedora-4-2.noarch
storage: [root@st01 ~]# rpm -aq | grep vdsm | sort
vdsm-4.10.0-10.fc17.x86_64
vdsm-cli-4.10.0-10.fc17.noarch
vdsm-gluster-4.10.0-10.fc17.noarch
vdsm-python-4.10.0-10.fc17.x86_64
vdsm-rest-4.10.0-10.fc17.noarch
vdsm-xmlrpc-4.10.0-10.fc17.noarch
Joop

On Tue, Oct 23, 2012 at 11:39:24AM -0400, Mike Kolesnik wrote:
<snip>
Yes, it sounds like the issue fixed by Mark Wu in http://gerrit.ovirt.org/6217. That fix is not rebased onto ovirt-3.1's vdsm. Maybe you can test master-branch vdsm to see if this issue is resolved (but other problems may crop up when you use the bleeding edge of the master branch...).
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
participants (2)
- Dan Kenigsberg
- Mike Kolesnik