vdsm networking changes proposal

Dan Kenigsberg danken at redhat.com
Tue Feb 26 15:45:50 UTC 2013


On Tue, Feb 26, 2013 at 10:11:46AM -0500, Alon Bar-Lev wrote:
> 
> 
> ----- Original Message -----
> > From: "Dan Kenigsberg" <danken at redhat.com>
> > To: "Alon Bar-Lev" <alonbl at redhat.com>
> > Cc: "Antoni Segura Puimedon" <asegurap at redhat.com>, vdsm-devel at fedorahosted.org, arch at ovirt.org
> > Sent: Monday, February 25, 2013 12:34:46 PM
> > Subject: Re: vdsm networking changes proposal
> > 
> > On Sun, Feb 17, 2013 at 03:57:33PM -0500, Alon Bar-Lev wrote:
> > > Hello Antoni,
> > > 
> > > Great work!
> > > I am very excited we are going this route; it is the first of many
> > > steps that will allow us to run on different distributions.
> > > I apologize I got to this so late.
> > > 
> > > Notes for the model, I am unsure if someone already noted.
> > > 
> > > I think that the abstraction should be more than entity and
> > > properties.
> > > 
> > > For example:
> > > 
> > > nic is a network interface
> > > bridge is a network interface and ports network interfaces
> > > bond is a network interface and slave network interfaces
> > > vlan is a network interface and vlan id
> > > 
> > > network interface can have:
> > > - name
> > > - ip config
> > > - state
> > > - mtu
> > > 
> > > this way it would be easier to share common code that handles pure
> > > interfaces.
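The hierarchy sketched above could look roughly as follows (a minimal illustration only; the class and attribute names are hypothetical, not taken from vdsm's actual model):

```python
# Sketch of the proposed model: a common base class carries the
# attributes shared by all network interfaces (name, ip config,
# state, mtu), and each device type adds only what is specific to it.
class NetworkInterface:
    def __init__(self, name, ip_config=None, state='down', mtu=1500):
        self.name = name
        self.ip_config = ip_config
        self.state = state
        self.mtu = mtu


class Nic(NetworkInterface):
    """A plain physical interface; nothing beyond the base class."""


class Bridge(NetworkInterface):
    def __init__(self, name, ports=(), **kwargs):
        super().__init__(name, **kwargs)
        self.ports = list(ports)  # ports are themselves NetworkInterfaces


class Bond(NetworkInterface):
    def __init__(self, name, slaves=(), **kwargs):
        super().__init__(name, **kwargs)
        self.slaves = list(slaves)  # slaves are themselves NetworkInterfaces


class Vlan(NetworkInterface):
    def __init__(self, name, device, tag, **kwargs):
        super().__init__(name, **kwargs)
        self.device = device  # the underlying NetworkInterface
        self.tag = tag        # the vlan id
```

Code that only cares about the shared attributes (setting an mtu, bringing a device up) can then operate on any of these types uniformly.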
> > 
> > I agree with you - even though OOD is falling out of fashion in
> > certain
> > circles.
> 
> If we develop software like dressing fashion, we end up with software working for a single season.
> 
> > 
> > > 
> > > I don't quite understand the 'Team' configurator; are you
> > > suggesting a provider for each technology?
> > 
> > Just as we may decide to move away from standard linux bridge to
> > ovs-based bridging, we may switch from bonding to teaming. I do not
> > think that we should do it now, but make sure that the design
> > accommodates this.
> 
> So there should be a separate provider for each object type, unless I am missing something.
> 
> > > 
> > > bridge
> > > - iproute2 provider
> > > - ovs provider
> > > - ifcfg provider
> > > 
> > > bond
> > > - iproute2
> > > - team
> > > - ovs
> > > - ifcfg
> > > 
> > > vlan
> > > - iproute2
> > > - ovs
> > > - ifcfg
> > > 
> > > So we can get a configuration of:
> > > bridge:iproute2
> > > bond:team
> > > vlan:ovs
> > 
> > I do not think that such complex combinations are of real interest.
> > The client should not (currently) be allowed to request them. Some
> > say that the specific combination that is used by Vdsm to implement
> > the network should be defined in a config file. I think that a python
> > file is good enough for that, at least for now.
> 
> You completely lost me; I do not see what this has to do with Python or with files.
> 
> If we have an iproute2 implementation that does bridge, vlan and bond, but we would like to use ovs for bridge and vlan, how can we reuse the iproute2 provider for the bond?
> 
> If we register provider per object type we may allow easier reuse.

Yes, this is the plan. However I do not think it is wise to support all
conceivable combinations of provider/object. A fixed one, such as "ovs
for bridge and vlan, iproute2 for bond" is good enough.
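Registering providers per object type, with one fixed combination chosen for vdsm, could be sketched like this (illustrative names only; this is not vdsm's actual registration API):

```python
# Hypothetical per-object-type provider registry. Each provider is
# registered under an (object type, technology) key, and a single
# fixed table picks the combination vdsm actually uses, e.g.
# "ovs for bridge and vlan, iproute2 for bond".
PROVIDERS = {}


def register(obj_type, technology):
    """Class decorator recording a provider for one object type."""
    def decorator(cls):
        PROVIDERS[(obj_type, technology)] = cls
        return cls
    return decorator


@register('bridge', 'ovs')
class OvsBridgeProvider:
    def configure(self, bridge):
        pass  # would shell out to ovs-vsctl here


@register('vlan', 'ovs')
class OvsVlanProvider:
    def configure(self, vlan):
        pass  # would create an ovs fake bridge or tagged port here


@register('bond', 'iproute2')
class IprouteBondProvider:
    def configure(self, bond):
        pass  # would shell out to ip link here


# The fixed combination; swapping technologies means editing one table.
CHOSEN = {'bridge': 'ovs', 'vlan': 'ovs', 'bond': 'iproute2'}


def provider_for(obj_type):
    """Instantiate the provider chosen for this object type."""
    return PROVIDERS[(obj_type, CHOSEN[obj_type])]()
```

An iproute2 bridge provider could still be registered alongside the ovs one; only the `CHOSEN` table decides which is used, which is the reuse Alon is asking about.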

> 
> This, however, does not imply that the implementation is in python (oh
> well...), nor whether it lives in a single file or in multiple files...

> 
> > > 
> > > ?
> > > 
> > > I also would like us to explore a future alternative of network
> > > configuration via a crypto vpn directly from one qemu to another;
> > > the idea is to have a kerberos-like key per layer3 (or layer2)
> > > destination, while communication is encrypted in user space and
> > > sent to a flat network. The advantage of this is that we manage the
> > > logical network and not the physical network, while relying on
> > > hardware to find the best route to the destination. The question is
> > > how and if we can provide this via the suggested abstraction. But
> > > maybe it is too soon to address this kind of future.
> > 
> > This is something completely different, as we say in Python.
> > The nice thing about your idea is that in the context of host network
> > configuration we need nothing more than our current bridge-bond-nic.
> > The sad thing about your idea is that it would scale badly with the
> > number of virtual networks. If a new VM comes live and sends an ARP
> > who-has broadcast message - which VMs should be bothered to attempt
> > to decrypt it?
> 
> This is easily filtered by a tag. Just like in MPLS.

How is it different from a vlan tag, then? Or do you suggest that we
trust qemu to do the tagging, instead of the host kernel?



