[ovirt-users] oVirt 3.5.3 Host Network Interfaces

Prof. Dr. Michael Schefczyk michael at schefczyk.net
Tue Jul 7 10:46:15 UTC 2015


Dear Soeren,

Thank you very much for your feedback. The issue seems to be that bonding mode balance-alb (which needs no special switch configuration) works, while 802.3ad (with the appropriate switch configuration) does not.

With the correct switch configuration, 802.3ad does connect the server to the LAN, but it appears to prevent oVirt/VDSM from working. As the hosted engine does not even start, it is then not possible to change the configuration any further.
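
In case it is useful for diagnosis: as far as I understand, the LACP
negotiation state can be checked on the host with, for example,

cat /proc/net/bonding/bond0

which shows an "802.3ad info" section; if the slaves' aggregator IDs do
not match the active aggregator, the switch side is not negotiating
LACP correctly.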

This is surprising to me, as I prefer standardized solutions making use of available hardware capabilities (i.e., 802.3ad). As a result, I will explore oVirt further based on balance-alb NIC bonding.

Regards,

Michael

-----Original Message-----
From: Soeren Malchow [mailto:soeren.malchow at mcon.net]
Sent: Tuesday, 7 July 2015 10:54
To: Prof. Dr. Michael Schefczyk; users at ovirt.org
Subject: Re: AW: [ovirt-users] oVirt 3.5.3 Host Network Interfaces

Dear Michael,

The network scripts look exactly like ours. During setting up the host network we also got the "already enslaved" message, but it did not cause a problem, since the NIC actually was enslaved; afterwards there were proper VDSM-generated interface files.

One difference, though: in our case oVirt already showed the bond even while it was not yet managed by oVirt.

You can still change the one bonded interface in oVirt and add one additional interface: just drag and drop, put the bridge in, and configure the IPs. However, if you want 802.3ad, you need to change the switch ports to 802.3ad as well shortly after you apply the change. To avoid having to do the switch configuration, use "custom" with "mode=5" or "mode=6".
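
For reference, mode=5 is balance-tlb and mode=6 is balance-alb; neither
needs switch support. In an ifcfg file that would look something like

BONDING_OPTS='mode=6 miimon=100'

and the same option string can go into the custom bonding options field
of "Setup Host Networks".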


Regards
Soeren 


On 07/07/15 01:14, "Prof. Dr. Michael Schefczyk" <michael at schefczyk.net>
wrote:

>Dear Soeren, dear all,
>
>Thank you very much. I did try the steps on an installed system. The
>result was that after manipulating the network scripts, the server
>would communicate well with the rest of the LAN - like the many other
>servers on which I use NIC bonds. However, my hosted engine refused to
>start: "hosted-engine --vm-start" results in "Connection to
>localhost:54321 refused". Needless to say, it is then not possible to
>revert the maintenance mode. ABRT points out what the problem is (but
>not the solution):
>"api.py:119:objectivizeNetwork:ConfigNetworkError: (24, u'nic enp0s20f1
>already enslaved to bond0')". This shows that oVirt cannot start after
>the manipulation of the network scripts, as it expects enp0s20f1 not to
>be a slave of a bond. The naming of the devices becomes clear from the
>following paragraph.
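>
>Two side notes, both only guesses at a workaround: "Connection to
>localhost:54321 refused" suggests that vdsmd itself is not running,
>which can be checked with "systemctl status vdsmd". And, assuming
>bond0 is up, a slave can be released via the bonding sysfs interface
>before letting oVirt/VDSM take the NIC over:
>
># remove enp0s20f1 from bond0 (note the leading minus)
>echo -enp0s20f1 > /sys/class/net/bond0/bonding/slaves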
>
>After multiple failed experiments setting up the server with bonded
>NICs (each requiring a configuration of the server from scratch), I
>took the following approach this time:
>- The server has two LAN-facing NICs, enp0s20f0 and enp0s20f1. The
>other two NICs point to the other server in the intended gluster
>cluster - creating a bond for those is not problematic.
>- Setting up CentOS 7, I just used enp0s20f0 with no bond and bridge.
>- When deploying oVirt, one gets asked for an unused NIC, so I selected
>enp0s20f1. The network then rested solely on enp0s20f1, with a working
>setup of oVirt - just without bonding the two NICs.
>
>I am still surprised that this HA network issue is so difficult to
>manage in software that is largely about high availability. Can anyone
>please indicate how to proceed towards NIC bonding?
>
>Regards,
>
>Michael
>
>
>Network Scripts as set manually:
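>(On CentOS 7 these live in /etc/sysconfig/network-scripts/ as
>ifcfg-enp0s20f0, ifcfg-enp0s20f1, ifcfg-bond0 and ifcfg-ovirtmgmt,
>respectively.)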
>
>DEVICE=enp0s20f0
>TYPE=Ethernet
>USERCTL=no
>SLAVE=yes
>MASTER=bond0
>BOOTPROTO=none
>HWADDR=00:25:90:F5:18:9A
>NM_CONTROLLED=no
>
>DEVICE=enp0s20f1
>TYPE=Ethernet
>USERCTL=no
>SLAVE=yes
>MASTER=bond0
>BOOTPROTO=none
>HWADDR=00:25:90:F5:18:9B
>NM_CONTROLLED=no
>
>DEVICE=bond0
>ONBOOT=yes
>BONDING_OPTS='mode=802.3ad miimon=100'
>BRIDGE=ovirtmgmt
>NM_CONTROLLED=no
>
>DEVICE=ovirtmgmt
>ONBOOT=yes
>TYPE=Bridge
>IPADDR=192.168.12.40
>NETMASK=255.255.255.0
>GATEWAY=192.168.12.1
>DNS=192.168.12.1
>NM_CONTROLLED=no
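>
>After a "systemctl restart network", the result can be verified with,
>for example:
>
>cat /sys/class/net/bond0/bonding/mode
>cat /sys/class/net/bond0/bonding/slaves
>brctl show ovirtmgmt
>
>(the last command assumes bridge-utils is installed).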
>
>
>
>
>-----Original Message-----
>From: Soeren Malchow [mailto:soeren.malchow at mcon.net]
>Sent: Monday, 6 July 2015 15:01
>To: Prof. Dr. Michael Schefczyk; users at ovirt.org
>Subject: Re: [ovirt-users] oVirt 3.5.3 Host Network Interfaces
>
>Dear Michael
>
>We actually created the ovirtmgmt bridge and the bond manually upfront,
>and then in "Setup Host Networks" we basically did this again
>(including setting the IP address). Regarding the bonding in the
>gluster network we did not have a problem: you just drag one interface
>onto the other and then select the bonding mode, where you can also go
>for bonding mode TLB or ALB if you choose "custom", or just LACP if you
>have switches that support this.
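>
>If I recall correctly, choosing "custom" in that dialog lets you type
>the kernel bonding options directly, for example
>
>mode=4 miimon=100 lacp_rate=1
>
>for 802.3ad (mode=4) with the fast LACP rate; lacp_rate is only meant
>to illustrate the kind of option the field accepts.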
>
>
>Step by Step:
>
>- set the engine to maintenance and shut it down (commands below)
>- configure the bond on the 2 NICs for the ovirtmgmt bridge
>( em1+em2 -> bond0 -> ovirtmgmt )
>- configure the IP on the bridge
>- reboot the server and see whether it comes up correctly
>- remove maintenance and let the engine start
>- set up the ovirtmgmt in "Setup Host Networks", but do not forget to
>set the IP and gateway as well
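>
>For the hosted-engine part, the maintenance steps above would be
>something like (assuming the hosted-engine CLI of your version
>supports these modes):
>
>hosted-engine --set-maintenance --mode=global
>hosted-engine --vm-shutdown
># ... change the network configuration, reboot ...
>hosted-engine --set-maintenance --mode=none
>hosted-engine --vm-start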
>
>It should work without this hassle (if the bonding modes on the switch
>and the server are compatible), but this way it is easy to get the
>server and the switch into the same mode and working, without having to
>do anything in oVirt first.
>
>Hope that helps
>
>Regards
>Soeren
>
>
>
>On 02/07/15 00:31, "users-bounces at ovirt.org on behalf of Prof. Dr.
>Michael Schefczyk" <users-bounces at ovirt.org on behalf of 
>michael at schefczyk.net>
>wrote:
>
>>Dear All,
>>
>>Having set up a CentOS 7 server with Gluster, oVirt 3.5.3 and hosted
>>engine according to
>>https://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/,
>>I was hoping that the NIC management and particularly the NIC
>>bond/bridge capabilities would have improved a bit. My server has four
>>NICs, two connected to the LAN and two to an adjacent server, to be
>>used as a Gluster network between the two servers. My aim is to use
>>NIC bonding for each pair of NICs.
>>
>>Via the engine, I would like to use Hosts -> Network Interfaces ->
>>Setup Host Networks. As I use hosted engine, I cannot set the only
>>host to maintenance mode. At least during normal operations, however,
>>I am neither able to change the ovirtmgmt bridge from DHCP to a static
>>IP nor to create a bond consisting of the two LAN-facing NICs. In each
>>case I get "Error while executing action Setup Networks: Network is
>>currently being used". Editing the network scripts manually is not an
>>option either, as that does not survive a reboot. Contrary to this
>>real-world experience, everything should be easily configurable
>>according to section 6.6 of the oVirt administration guide.
>>
>>One workaround approach could be to temporarily move one NIC
>>connection from the adjacent server to the LAN, or even to temporarily
>>swap both pairs of NICs, and to edit the interfaces while they are not
>>in use. Is this really the way forward? Should there not be a more
>>elegant approach that does not require physically replugging NIC
>>connections just to work around such an issue?
>>
>>Regards,
>>
>>Michael
>>
>>
>
>
