[ovirt-users] Configuring another interface for trunked (tagged) VM traffic

Will Dennis wdennis at nec-labs.com
Sat Jan 2 22:44:17 UTC 2016


I found the following (older) article, which gave me a clue…
http://captainkvm.com/2013/04/maximizing-your-10gb-ethernet-in-kvm/

So I configured the following in /etc/sysconfig/network-scripts on each of my hosts —

[root@ovirt-node-01 network-scripts]# cat ifcfg-enp4s0f0
HWADDR=00:15:17:7B:E9:EA
TYPE=Ethernet
BOOTPROTO=none
NAME=enp4s0f0
UUID=8b006c8c-b5d3-4dae-a1e7-5ca463119be3
ONBOOT=yes
SLAVE=yes
MASTER=bond0

(^^^ same sort of file made for enp4s0f1)

[root@ovirt-node-01 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=4 miimon=100"
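
(Not from the article, but a quick sanity check once bond0 is up: the kernel exposes the bond state under /proc, so you can confirm that mode=4 (LACP/802.3ad) actually negotiated with the switch.)

# "MII Status: up" should appear for bond0 and for each slave, and the
# 802.3ad section should show an aggregator with both ports attached
cat /proc/net/bonding/bond0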

[root@ovirt-node-01 network-scripts]# cat ifcfg-bond0.180
DEVICE=bond0.180
VLAN=yes
BOOTPROTO=static
ONBOOT=yes
BRIDGE=br180

(^^^ same sort of file made for other VLANs)
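
(Again, a check of my own rather than from the article: iproute2 can show the tag on the sub-interface.)

# -d (details) prints the VLAN header info; expect a line like
# "vlan protocol 802.1Q id 180" for this sub-interface
ip -d link show bond0.180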

[root@ovirt-node-03 network-scripts]# cat ifcfg-br180
DEVICE=br180
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
DELAY=0

(^^^ same sort of file made for other bridges)
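
(One more check along the same lines: brctl, from bridge-utils, lists each bridge with its enslaved ports.)

# bond0.180 should show up as a port of br180 once everything is up
brctl show br180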

So all of that makes the following device chain:

http://i1096.photobucket.com/albums/g330/willdennis/ovirt-bond-layout.png

But then I read this next article:
http://captainkvm.com/2013/04/maximizing-your-10gb-ethernet-in-rhev/

This leads me to believe (if the process is still the same on current oVirt/RHEV) that I could stop at the bond0 setup: by tying the networks I created for the VLANs of interest (which do have the proper VLAN tags set on them) to the bond, oVirt would automatically create the needed bond0 VLAN sub-interfaces and the related per-VLAN bridges.
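
(If that's right, then — as I understand it, assuming a logical network named "vm-traffic" tagged 180 and attached to bond0 — VDSM would create the VLAN sub-interface itself and name the bridge after the logical network, rather than "br180". That should be visible on the host:)

# look for a bond0.<tag> sub-interface and a bridge named after the
# oVirt logical network (e.g. "vm-traffic"), both created by VDSM
ip -d link show
brctl show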

So, is there a way to tie the oVirt networks to the bridges I’ve already created? (They don’t show up in the oVirt webadmin “Setup Host Networks” dialog.) Or should I just match the oVirt networks with the bond0 interface and let oVirt create whatever structure it wants? (And if so, I guess I’d need to remove the bond0 VLAN sub-interfaces and the related per-VLAN bridges I created?)
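
(In case it's the latter, this is the sort of cleanup I'd run per host and per VLAN — a sketch, with VLAN 180 shown; the bond and slave configs would stay in place:)

# take down the hand-made bridge and VLAN sub-interface...
ifdown br180
ifdown bond0.180
# ...remove their config files so they don't come back on boot...
rm /etc/sysconfig/network-scripts/ifcfg-br180 \
   /etc/sysconfig/network-scripts/ifcfg-bond0.180
# ...then restart networking so only bond0 + slaves remain
systemctl restart network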


On Dec 31, 2015, at 1:56 PM, Will Dennis <wdennis at nec-labs.com> wrote:

Hi all,

Taking the next step in configuring my newly-established oVirt cluster: setting up a trunked (VLAN-tagged) connection to each cluster host (there are 3) for VM traffic. What I’m looking at is akin to setting up vSwitches on VMware, except I’ve never done this on a VMware cluster, just on individual hosts…

Anyhow, I have the following NICs available on my three hosts (conveniently, they are the exact same hardware platform):

ovirt-node-01 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000

ovirt-node-02 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000

ovirt-node-03 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000

As you can see, I am using the ‘enp12s0f0’ interface on each host for the ‘ovirtmgmt’ bridge. This network carries the admin traffic as well as Gluster distributed filesystem traffic, but I now want to establish a separate link to each host for VM traffic. The ‘ovirtmgmt’ bridge is NOT trunked/tagged; only a single VLAN is used. For the VM traffic, I’d like to use the ‘enp4s0f0’ interface on each host, tie it into a logical network named “vm-traffic” (or the like), and make that a trunked/tagged interface.
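What I think I want to end up with, expressed as a REST call against the engine (a sketch only — the engine FQDN, the “Default” data center name, and tag 180 are placeholders, and the data center may need to be referenced by UUID on some versions; the same thing can be defined in the webadmin under Networks):

curl -k -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' \
     -d '<network>
           <name>vm-traffic</name>
           <data_center><name>Default</name></data_center>
           <vlan id="180"/>
           <usages><usage>vm</usage></usages>
         </network>' \
     https://engine.example.com/ovirt-engine/api/networks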

Are there any existing succinct instructions on how to do this? I have been reading through the oVirt Admin Guide’s “Logical Networks” section (http://www.ovirt.org/OVirt_Administration_Guide#Logical_Network_Tasks) but it hasn’t “clicked” in my mind yet...

Thanks,
Will


