Sorry for cross-posting!

I would like to inquire about the roadmap of host network management in
oVirt, to make sure the ideas to be worked on are welcome in the community.

I did some initial investigation on the following topics. I am not very
familiar with them, so the information below may contain some inaccuracies
or errors.

netcf:

netcf provides a cross-platform network configuration library/tool: it
converts the XML definition of an interface into the distribution's local
config files. It is already used by libvirt to manage host network
interfaces. It supports all the network entities we need (bridge, vlan,
bond, nic), and it also supports configuration rollback. The benefit for
vdsm is that it would make host network stack configuration easier to port
to other distros.

Problems found:
 - It doesn't restore an interface's live state during a config transaction
   yet; there is a feature request submitted for this.
 - Some advanced settings are not supported by netcf, such as
   'NM_CONTROLLED' and some less-used bonding options.

netcf doesn't provide an official Python binding, but we can use the libvirt
API to integrate it into vdsm. It shouldn't have any impact on the engine
side.
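
To make that concrete, here is a minimal sketch (not a proposed vdsm API) of
driving netcf through the libvirt interface calls from Python, including the
transaction/rollback part. The bridge/nic names, addressing and the
qemu:///system connection are just examples.

    # Sketch only: assumes the libvirt Python bindings and a netcf-backed
    # libvirt; 'br0'/'eth0' are example names.
    import libvirt

    BRIDGE_XML = """
    <interface type='bridge' name='br0'>
      <start mode='onboot'/>
      <protocol family='ipv4'>
        <dhcp/>
      </protocol>
      <bridge stp='off' delay='0'>
        <interface type='ethernet' name='eth0'/>
      </bridge>
    </interface>
    """

    conn = libvirt.open('qemu:///system')
    conn.changeBegin(0)                         # start a config transaction
    try:
        iface = conn.interfaceDefineXML(BRIDGE_XML, 0)  # write config files
        iface.create(0)                                 # bring the bridge up
        conn.changeCommit(0)                            # keep the new config
    except libvirt.libvirtError:
        conn.changeRollback(0)                          # restore old config
        raise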

IEEE 802.1Qbg (VEPA):

VEPA offloads network switching from the server to the external physical
switch. It makes all VMs' traffic visible to the physical switch, so the
existing switch functions (firewall, QoS, etc.) can be applied to VMs
directly. The offload also frees up the server resources used for switching.

libvirt already supports this by using macvtap as the vif and working with
lldpad, which registers the vif's mac/vlan information with the physical
switch. We only need to add a 'virtualport' element to an interface's XML
definition to get a VEPA interface. To support it in oVirt we would probably
still need to configure lldpad and query the available VSI types for the
virtualport profile.
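
For illustration, this is roughly how the guest side looks with libvirt
today: a macvtap ('direct') vif carrying a virtualport element. The
managerid/typeid/instanceid values below are placeholders; in practice they
would have to match a VSI type registered with lldpad and the adjacent
switch, and 'myvm' is just an example domain name.

    # Sketch only: attach a VEPA (802.1Qbg) vif to a running domain.
    import libvirt

    VEPA_VIF_XML = """
    <interface type='direct'>
      <source dev='eth0' mode='vepa'/>
      <virtualport type='802.1Qbg'>
        <parameters managerid='1' typeid='2' typeidversion='1'
                    instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
      </virtualport>
      <model type='virtio'/>
    </interface>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('myvm')
    dom.attachDevice(VEPA_VIF_XML)   # lldpad registers the mac/vlan pair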

quantum:

Both the openvswitch and linuxbridge plugins store the abstract network
entities (network, port) in a database and create the bridge/vlan on demand
via ip/brctl or ovs-vsctl. Only one bridge is created on each server, and
one vlan is created for each virtual network. That means only one nic can be
used for the VM network. The plugins don't configure the nic or a bond, even
though openvswitch itself supports bonding. Both the traditional network
stack configuration and quantum will be supported in oVirt, for different
purposes, right?
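
Just to illustrate the level these plugins work at (openvswitch plugin style
shown; the bridge/port names and vlan tag are made-up examples, the real
plugins derive them from the network/port records in their database):

    # Rough sketch of the commands a quantum agent issues on demand.
    import subprocess

    def run(cmd):
        subprocess.check_call(cmd.split())

    # One integration bridge per host...
    run("ovs-vsctl add-br br-int")

    # ...and one vlan per virtual network: the VM's tap port gets tagged
    # with the vlan id allocated to its network.
    run("ovs-vsctl add-port br-int tap0 tag=101")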

Any comments? Thanks!