Host network management roadmap inquiry

Mark Wu wudxw at linux.vnet.ibm.com
Wed Aug 1 12:59:14 UTC 2012


Sorry for cross-posting!

I would like to inquire about the roadmap for host network management in 
oVirt, in order to
make sure the ideas to be worked on are welcomed by the community.

I did some initial investigation into the following topics. I am not very 
familiar with them, so the information may contain some inaccuracies or 
errors.

netcf:

     It provides a cross-platform network configuration library/tool by 
converting the XML definition of an interface into the local config files. 
It's already used by libvirt to manage host network interfaces. It 
supports all the usual network entities, including bridges, VLANs, bonds, 
and NICs, and it also supports configuration rollback. The benefit for 
vdsm is that it makes the host network stack configuration easy to port 
to other distros.

Problems found:
     It doesn't restore an interface's live state during a config 
transaction yet. There's a feature request submitted for it.
     There are some advanced settings not supported by netcf, like 
'NM_CONTROLLED' and some less commonly used bonding options.

     It doesn't provide Python bindings officially, but we can use the 
libvirt API to integrate it into vdsm. It shouldn't have any impact on 
the engine side.


IEEE 802.1Qbg (VEPA)

      It can offload network switching from the server to an external 
physical switch. It makes all the VMs' traffic visible to the physical 
switch, so the existing switch functions (firewall, QoS, etc.) can be 
applied to VMs immediately. The offload also frees up the server 
resources used for switching.
      Now libvirt supports it by using macvtap as the vif and working 
with lldpad, which registers the vif's MAC/VLAN information with the 
physical switch. We can just add a 'virtualport' element to an 
interface's XML definition to add a VEPA interface. To support it in 
oVirt, we probably still need to configure lldpad and query the 
available VSI types for the virtualport profile.
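A minimal sketch of such a domain interface definition, with macvtap in 
VEPA mode and an 802.1Qbg 'virtualport' element. The NIC name and the VSI 
parameter values are placeholders; in practice the managerid/typeid come 
from the switch administrator:

```python
# Sketch only: builds a libvirt domain <interface> using macvtap in VEPA
# mode with an 802.1Qbg virtualport. All parameter values are made up.
import xml.etree.ElementTree as ET

def vepa_interface_xml(nic, managerid, typeid, typeidversion):
    """Return the XML for a VEPA vif on the given physical NIC."""
    iface = ET.Element("interface", type="direct")
    ET.SubElement(iface, "source", dev=nic, mode="vepa")
    vport = ET.SubElement(iface, "virtualport", type="802.1Qbg")
    ET.SubElement(vport, "parameters",
                  managerid=str(managerid), typeid=str(typeid),
                  typeidversion=str(typeidversion))
    ET.SubElement(iface, "model", type="virtio")
    return ET.tostring(iface, encoding="unicode")

print(vepa_interface_xml("eth0", 1, 2, 1))
```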


quantum

    Both the openvswitch and linuxbridge plugins store the abstract 
network entities (network, port) in a database and create bridges/VLANs 
on demand via the ip/brctl or ovs-vsctl tools. Only one bridge is 
created per server, and one VLAN is created for each virtual network. 
That means only one NIC can be configured for the VM network. It doesn't 
configure NICs or bonds, even though openvswitch also supports bonding. 
Both the traditional network stack configuration and quantum will be 
supported in oVirt, for different purposes, right?
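To make that concrete, here is a rough sketch (my reading of the plugin 
behaviour, not the actual agent code) of the commands a linuxbridge-style 
plugin would run to plumb one virtual network's VLAN; all the names are 
hypothetical:

```python
# Sketch only: the shell commands a linuxbridge-style quantum agent
# would run for one virtual network. NIC, VLAN id, and network id are
# hypothetical; real agents derive these from the database.
def linuxbridge_commands(physical_nic, vlan_id, net_id):
    """Return the commands to create a VLAN subinterface and bridge."""
    vlan_dev = "%s.%d" % (physical_nic, vlan_id)
    bridge = "brq-%s" % net_id
    return [
        "ip link add link %s name %s type vlan id %d"
        % (physical_nic, vlan_dev, vlan_id),
        "ip link set %s up" % vlan_dev,
        "brctl addbr %s" % bridge,
        "brctl addif %s %s" % (bridge, vlan_dev),
        "ip link set %s up" % bridge,
    ]

for cmd in linuxbridge_commands("eth0", 100, "net1"):
    print(cmd)
```

Note that nothing here touches the physical NIC's own configuration 
(bonding, addressing), which is why I read the plugins as leaving NIC and 
bond setup to something else.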

    Any comments? Thanks!

