Host network management roadmap inquiry
Livnat Peer
lpeer at redhat.com
Mon Aug 6 07:53:52 UTC 2012
On 06/08/12 10:16, Mark Wu wrote:
>
> Hi Livnat,
>
> Many thanks for your reply! Please see my inline comments.
>
> On 08/05/2012 01:06 AM, Livnat Peer wrote:
>> On 01/08/12 15:59, Mark Wu wrote:
>>> Sorry for cross-posting!
>>>
>>> I would like to inquire about the roadmap of host network management in
>>> oVirt, in order to make sure the ideas to be worked on are welcomed by
>>> the community.
>>>
>>> I did some initial investigation of the following topics. I am not very
>>> familiar with them, so the information may contain some inaccuracies or
>>> errors.
>>>
>> Hi Mark,
>>
>> My name is Livnat Peer, I'm focused on Networking in oVirt.
>> I am wondering if there is interest in a monthly meeting on
>> networking in oVirt. I think we can discuss the current status of
>> networking features/bugs and the road map for future oVirt versions.
> Sure, I am glad to join. Thanks for your invitation.
An invite was sent to the lists.
>>
>>> netcf:
>>>
>>> It provides a cross-platform network configuration library/tool by
>>> converting the XML definition of an interface into local config files.
>>> It's already used by libvirt to manage host network interfaces. It
>>> supports all network entities, including bridge, vlan, bond and nic, and
>>> it also supports configuration rollback. The benefit for vdsm is making
>>> the host network stack configuration easy to port to other distros.
>>>
>>> Problems found:
>>> It doesn't restore an interface's live state during a config
>>> transaction yet. There's a feature request submitted for it.
>>> There are some advanced settings not supported by netcf, like
>>> 'NM_CONTROLLED' and some less-used bonding options.
>>>
>>> It doesn't officially provide a Python binding, but we can use the
>>> libvirt API to integrate it into vdsm. It shouldn't have any impact on
>>> the engine side.
>>>
>> Making it easy to consume vdsm in other distros has great value for the
>> oVirt project; I don't see a reason not to do that.
>> I think we should start by mapping the functionality currently used by
>> vdsm and seeing what is missing for us to use netcf.
> I am going to implement a prototype for it. We can probably find out
> more about what's missing in netcf during the prototype development.
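A minimal sketch of what such a prototype could look like, assuming the
libvirt Python binding and its interface transaction API (changeBegin /
changeCommit / changeRollback, which are backed by netcf on the host). The
bridge and nic names are examples only, not anything vdsm uses today:

```python
# Sketch: driving netcf through libvirt's interface API instead of
# writing ifcfg files directly. Names below are placeholders.

def bridge_interface_xml(bridge, nic, onboot=True):
    """Build libvirt interface XML for a bridge enslaving one NIC."""
    start = "onboot" if onboot else "none"
    return (
        f"<interface type='bridge' name='{bridge}'>"
        f"<start mode='{start}'/>"
        f"<bridge><interface type='ethernet' name='{nic}'/></bridge>"
        f"</interface>"
    )

def define_bridge(bridge, nic):
    """Define the bridge inside a netcf transaction so a failure rolls back."""
    import libvirt  # Python binding; netcf is reached through libvirtd
    conn = libvirt.open("qemu:///system")
    try:
        conn.changeBegin(0)    # start a netcf config transaction
        iface = conn.interfaceDefineXML(bridge_interface_xml(bridge, nic), 0)
        iface.create()         # bring the interface up
        conn.changeCommit(0)   # persist the new host config
    except libvirt.libvirtError:
        conn.changeRollback(0) # restore the previous host config
        raise
    finally:
        conn.close()
```

The transaction calls are what would address the live-state restore concern
above: an error between changeBegin and changeCommit can be rolled back.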
>> I think there was a proposal to use Network Manager in Fedora that was
>> also supposed to work with netcf, but I don't have more details on that.
>>
>> danken - do you recall something more specific?
>>
>> BTW - Can you please send the link to the feature request for netcf to
>> support restore?
> Here's the feature request, and I have added you to the cc list :)
> https://bugzilla.redhat.com/show_bug.cgi?id=737149
thank you
>>
>>> IEEE 802.1Qbg(VEPA)
>>>
>>> It can offload network switching from the server to an external physical
>>> switch. It makes all VMs' traffic visible to the physical switch, and
>>> therefore the existing switch functions (firewall, QoS, etc.) can be
>>> applied to VM traffic immediately. The offload also frees up the server
>>> resources used by switching.
>>> libvirt now supports it by using macvtap as the vif and working with
>>> lldpad, which registers the vif's mac/vlan information with the physical
>>> switch. We can just add a 'virtualport' element to an interface XML
>>> definition to create a VEPA interface. To support it in oVirt we
>>> probably still need to configure lldpad and query the available VSI
>>> types for the virtualport profile.
>>>
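For reference, the 'virtualport' element mentioned above looks roughly like
this in a libvirt domain interface definition. The device name and VSI ids
are placeholder values (lldpad registers them with the switch), and libvirt
can auto-generate the instanceid when it is omitted:

```xml
<interface type='direct'>
  <source dev='eth0' mode='vepa'/>
  <virtualport type='802.1Qbg'>
    <parameters managerid='1' typeid='2' typeidversion='1'/>
  </virtualport>
</interface>
```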
>> when discussing the modeling of 802.1Qbg we should also look into
>> 802.1Qbh, the modeling of the two should have a lot in common.
>>
>> We looked into modeling the above two in the past but did not get a
>> chance to actually work on it yet.
>>
>> When adding support for a new technology in ovirt, especially in the
>> modeling phase I think it is important to understand how ovirt users are
>> going to use this technology and how the engine and vdsm together are
>> going to provide a complete solution for our users.
>>
> Makes sense. We need to collect more customer use cases for them before
> modeling them.
>>
>>
>>> quantum
>>>
>>> Both the openvswitch and linuxbridge plugins store abstract network
>>> entities (network, port) in a database and create bridges/vlans on
>>> demand via the ip/brctl or ovs-vsctl tools. Only one bridge is created
>>> on each server, and one vlan is created for each virtual network. That
>>> means only one nic can be configured for the VM network. It doesn't
>>> configure the nic or bond, even though openvswitch also supports
>>> bonding. Both the traditional network stack configuration and quantum
>>> will be supported in oVirt, for different purposes, right?
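To illustrate the mapping described above (one integration bridge per host,
one vlan tag per virtual network), here is a hypothetical sketch of the
commands the two plugins effectively run. The command names (ip, brctl,
ovs-vsctl) are the real tools the plugins shell out to, but the helper
functions and the naming scheme are illustrative only:

```python
# Sketch of the Quantum-style network-to-host-command mapping.
# These helpers only build the command lines; they do not execute them.

def linuxbridge_commands(nic, vlan_id, net_id):
    """Commands the linuxbridge plugin would run for one virtual network."""
    vlan_dev = f"{nic}.{vlan_id}"          # vlan sub-interface on the nic
    bridge = f"brq{net_id[:11]}"           # one bridge per virtual network
    return [
        ["ip", "link", "add", "link", nic, "name", vlan_dev,
         "type", "vlan", "id", str(vlan_id)],
        ["brctl", "addbr", bridge],
        ["brctl", "addif", bridge, vlan_dev],
    ]

def ovs_commands(bridge, port, vlan_id):
    """Commands the openvswitch plugin would run: tag the VM port."""
    return [
        ["ovs-vsctl", "add-br", bridge],                  # integration bridge
        ["ovs-vsctl", "add-port", bridge, port, f"tag={vlan_id}"],
    ]
```

Note that neither path touches the nic or bond configuration itself, which
is exactly the gap raised above.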
>>>
>> We had some discussions on integration with Quantum, which included a few
>> upstream calls to discuss the gaps we have in order to use quantum in
>> oVirt. We had Gary, who is working on quantum, in these sessions, and the
>> link to the summary of our work so far was sent earlier on this thread.
>>
>>
>>
>> Other than the above, I maintain a wiki page with all the gaps we are
>> aware of for networking in oVirt -
>>
>> http://wiki.ovirt.org/wiki/Networking
>>
>> There you can see that there was a proposal to use Network-Manager in VDSM.
>>
>> I see that Fabian split the page to features and technologies, thanks
>> Fabian :)
>>
>>
>> Livnat
>>
>>
>>
>>> Any comments? Thanks!
>>>
>>>
>>> _______________________________________________
>>> Arch mailing list
>>> Arch at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/arch
>>>
>