[Users] Making bonds in oVirt

Karli Sjöberg Karli.Sjoberg at slu.se
Tue Oct 23 05:38:12 UTC 2012


On 22 Oct 2012, at 22:46, Joop wrote:

> Hi All,
> 
> I'm slowly progressing with our test environment; currently I'm testing my 10GbE NICs and switches and have hit a snag.
> I watched the webcast about setting up a non-VM network, which included creating a bond using the web interface, but I can't get it to work correctly.
> I have two problems:
> - My ovirtmgmt network always reverts to DHCP in Hosts/Network Interfaces/Setup Host Networks.
> - Bonding p3p1 and p3p2 into bond0 with mode 4 and assigning an IP address doesn't work either.
> The bond is created, but no IP address is assigned, and I end up with errors in dmesg.
> On ST01 I just dropped the bond and restarted the node, so I have a clean dmesg.
> On ST02 I repeated the bonding, but no luck.
> The host is now Non Responsive because of the missing network, so I put it into maintenance.
> In Setup Host Networks I create a bond between p3p1 and p3p2 (bond name bond0, mode 4) and edit ovirtmgmt to set it to static instead of DHCP; the IP address is correct.
> I assign my storage network to the bond, edit its properties and set its IP to static 172.29.0.1, netmask 255.255.255.0. The mouse-over popups show the correct IP addresses for both storage and ovirtmgmt. I check Save and Verify, press OK, and there is no IP address on the bond :-((
> ST02 dmesg:
> [43523.670576] bonding: unable to update mode of bond3 because it has slaves.
> [43523.734204] bonding: bond3: Setting MII monitoring interval to 150.
> [43523.736633] 8021q: adding VLAN 0 to HW filter on device bond3
> [43534.234245] bond3: no IPv6 routers present
> 
> Nothing in engine.log, but maybe there would be if the log level were set higher; how do I do that?
> 
> [root@st02 ~]# rpm -aq | grep vdsm
> vdsm-4.10.0-7.fc17.x86_64
> vdsm-cli-4.10.0-7.fc17.noarch
> vdsm-xmlrpc-4.10.0-7.fc17.noarch
> vdsm-python-4.10.0-7.fc17.x86_64
> vdsm-gluster-4.10.0-7.fc17.noarch
> vdsm-rest-4.10.0-7.fc17.noarch
> 
> 
> 
> Nothing on ST01 when I try to create a bond, but I do get an error in the mgmt web interface:
> Error while executing action Setup Networks: Internal oVirt Engine Error
> and this in engine.log:
> 2012-10-22 22:41:40,175 INFO  [org.ovirt.engine.core.bll.SetupNetworksCommand] (ajp--0.0.0.0-8009-8) [27ee141a] Running command: SetupNetworksCommand internal: false. Entities affected :  ID: b84be568-f101-11e1-9f16-78e7d1f4ada5 Type: VDS
> 2012-10-22 22:41:40,177 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SetupNetworksVDSCommand] (ajp--0.0.0.0-8009-8) [27ee141a] START, SetupNetworksVDSCommand(vdsId = b84be568-f101-11e1-9f16-78e7d1f4ada5), log id: 518d832b
> 2012-10-22 22:41:40,177 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SetupNetworksVDSCommand] (ajp--0.0.0.0-8009-8) [27ee141a] FINISH, SetupNetworksVDSCommand, log id: 518d832b
> 2012-10-22 22:41:40,185 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-8) [27ee141a] java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException
> 2012-10-22 22:41:40,186 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--0.0.0.0-8009-8) [27ee141a] Command PollVDS execution failed. Exception: RuntimeException: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException
> 2012-10-22 22:41:40,687 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-8) [27ee141a] java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException
> 2012-10-22 22:41:40,688 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--0.0.0.0-8009-8) [27ee141a] Command SetupNetworksVDS execution failed. Exception: RuntimeException: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException
> 2012-10-22 22:41:40,693 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--0.0.0.0-8009-8) [27ee141a] No string for UNASSIGNED type. Use default Log
> 
> ovirt-engine-3.1.0-2.fc17.noarch
> ovirt-engine-backend-3.1.0-2.fc17.noarch
> ovirt-engine-cli-3.1.0.6-1.fc17.noarch
> ovirt-engine-config-3.1.0-2.fc17.noarch
> ovirt-engine-dbscripts-3.1.0-2.fc17.noarch
> ovirt-engine-genericapi-3.1.0-2.fc17.noarch
> ovirt-engine-notification-service-3.1.0-2.fc17.noarch
> ovirt-engine-restapi-3.1.0-2.fc17.noarch
> ovirt-engine-sdk-3.1.0.4-1.fc17.noarch
> ovirt-engine-setup-3.1.0-2.fc17.noarch
> ovirt-engine-tools-common-3.1.0-2.fc17.noarch
> ovirt-engine-userportal-3.1.0-2.fc17.noarch
> ovirt-engine-webadmin-portal-3.1.0-2.fc17.noarch
> ovirt-guest-agent-1.0.5-1.fc17.x86_64
> ovirt-image-uploader-3.1.0-0.git9c42c8.fc17.noarch
> ovirt-iso-uploader-3.1.0-0.git1841d9.fc17.noarch
> ovirt-log-collector-3.1.0-0.git10d719.fc17.noarch
> ovirt-node-2.5.5-0.fc17.noarch
> ovirt-node-iso-2.5.5-0.1.fc17.noarch
> ovirt-release-fedora-4-2.noarch
> 
> Please let me know if I need to provide other logs or need to run some commands to verify setup.
> 
> Thanks,
> 
> Joop
> 
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

Hey,

I was having issues with oVirt Node too, not being able to set up the networking "just right", the way I wanted it, so I started over with plain Fedora and set it up this way:

Services that need tuning in the beginning:
# systemctl stop NetworkManager.service
# systemctl disable NetworkManager.service
# systemctl start sshd.service
# systemctl enable sshd.service
# systemctl enable network.service
# iptables --flush
# iptables-save > /etc/sysconfig/iptables
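
If you want to double-check that the services ended up in the intended state afterwards, a quick sanity check could look like this:
# systemctl is-enabled NetworkManager.service
# systemctl is-enabled network.service
# systemctl is-active sshd.service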

Install yum-priorities, define the oVirt repo, and install the needed packages:
# yum install -y yum-priorities

# cat > /etc/yum.repos.d/ovirt-engine.repo << EOF
[ovirt-engine-3.1]
priority=1
name=ovirt-engine-3.1
baseurl=http://www.ovirt.org/releases/3.1/rpm/Fedora/17/
enabled=1
gpgcheck=0

EOF

# yum upgrade -y
# yum install -y qemu-kvm qemu-kvm-tools vdsm vdsm-cli libjpeg spice-server pixman seabios qemu-img fence-agents libselinux-python
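
Afterwards you can compare what got pulled in against the versions Joop posted, for example:
# rpm -qa | grep -E 'vdsm|qemu-kvm'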

Now comes the networking. Start by telling modprobe to load the bonding driver for bond0:
# cat > /etc/modprobe.d/bonding.conf << EOF
alias bond0 bonding

EOF
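
You can load the module right away and check that the kernel picked it up; the bonding driver lists the bonds it knows about in sysfs:
# modprobe bonding
# cat /sys/class/net/bonding_masters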

Then define the bond itself; mode=4 is LACP (802.3ad):
# cat > /etc/sysconfig/network-scripts/ifcfg-bond0 << EOF
DEVICE=bond0
NM_CONTROLLED=no
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100"
TYPE=Ethernet
MTU=9000

EOF
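
mode=4 also requires the switch ports to be configured for LACP. Once the bond is up (after the network restart further down), you can check that 802.3ad actually negotiated by looking at the bonding status in /proc, for example:
# cat /proc/net/bonding/bond0
It should report the mode as IEEE 802.3ad Dynamic link aggregation and list both slaves with the same aggregator ID.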

"Enslave" the physical NICs to the bond:
# cat > /etc/sysconfig/network-scripts/ifcfg-em1 << EOF
NM_CONTROLLED="no"
BOOTPROTO="none"
DEVICE="em1"
ONBOOT="yes"
USERCTL=no
MASTER=bond0
SLAVE=yes

EOF

# cat > /etc/sysconfig/network-scripts/ifcfg-em2 << EOF
NM_CONTROLLED="no"
BOOTPROTO="none"
DEVICE="em2"
ONBOOT="yes"
USERCTL=no
MASTER=bond0
SLAVE=yes

EOF
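
To confirm that both NICs really got enslaved once things are up, you can look at the bond's slave list in sysfs, for instance:
# cat /sys/class/net/bond0/bonding/slaves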

After that, you create VLAN interfaces on top of the bond. In this example, I'm using VLAN IDs 1 and 2:
# cat > /etc/sysconfig/network-scripts/ifcfg-bond0.1 << EOF
DEVICE=bond0.1
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=br1
MTU=1500

EOF

# cat > /etc/sysconfig/network-scripts/ifcfg-bond0.2 << EOF
DEVICE=bond0.2
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=ovirtmgmt
MTU=9000

EOF
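
The 8021q module should get loaded automatically when the VLAN interfaces come up, and the tag-to-device mapping ends up in /proc, so a quick check could be:
# cat /proc/net/vlan/config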

Create the bridges on top of the VLAN interfaces. The names, as I understand it, can be whatever you want, but one of them of course needs to be called "ovirtmgmt":
# cat > /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt << EOF
TYPE=Bridge
NM_CONTROLLED="no"
BOOTPROTO="none"
DEVICE="ovirtmgmt"
ONBOOT="yes"
IPADDR=XXX.XXX.XXX.XXX
NETMASK=255.255.255.0

EOF

# cat > /etc/sysconfig/network-scripts/ifcfg-br1 << EOF
TYPE=Bridge
NM_CONTROLLED="no"
BOOTPROTO="none"
DEVICE="br1"
ONBOOT="yes"
IPADDR=XXX.XXX.XXX.XXX
NETMASK=255.255.255.0

EOF
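
With bridge-utils installed (vdsm should have pulled it in as a dependency), you can then see which ports ended up in which bridge:
# brctl show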

The default gateway goes into:
# cat > /etc/sysconfig/network << EOF
GATEWAY=XXX.XXX.XXX.XXX

EOF

Lastly, restart networking and, for good measure, enable and start ntpd:
# systemctl restart network.service
# systemctl enable ntpd.service
# systemctl start ntpd.service
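
After the restart, a quick look at the addresses and the routing table tells you whether everything came up as intended:
# ip addr show ovirtmgmt
# ip addr show br1
# ip route show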

This way, you can have almost as many interfaces as you want (up to 4096 VLANs) with only two physical NICs. I also gave an example of how to have jumbo frames active on some interfaces and the regular frame size (MTU 1500) on the rest. Jumbo frames should only be active on interfaces that aren't routed, since the default MTU on a routed path is 1500.
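
If you want to verify that jumbo frames really make it across, you can ping another host on the jumbo VLAN with a large packet and the don't-fragment flag set; 8972 bytes of payload plus 28 bytes of IP/ICMP headers adds up to the 9000 MTU. This assumes the other end and the switch ports are set to MTU 9000 as well (the address below is just a placeholder):
# ping -M do -s 8972 <host-on-the-jumbo-vlan>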

/Karli



