feature suggestion: in-host network with no external nics

Dan Kenigsberg danken at redhat.com
Tue Jan 8 12:22:14 UTC 2013


On Mon, Jan 07, 2013 at 12:07:15PM -0500, Simon Grinberg wrote:
> 
> 
> ----- Original Message -----
> > From: "Dan Kenigsberg" <danken at redhat.com>
> > To: "arch" <arch at ovirt.org>
> > Cc: "Livnat Peer" <lpeer at redhat.com>, "Moti Asayag" <masayag at redhat.com>, "Michael Pasternak" <mpastern at redhat.com>
> > Sent: Thursday, January 3, 2013 12:07:22 PM
> > Subject: feature suggestion: in-host network with no external nics
> > 
> > Description
> > ===========
> > In oVirt, after a VM network is defined at the Data Center level and
> > added to a cluster, it needs to be implemented on each host. All VM
> > networks are (currently) based on a Linux software bridge. The
> > specific implementation controls how traffic from that bridge reaches
> > the outer world. For example, the bridge may be connected externally
> > via eth3, or via bond3 over eth2 and p1p2. This feature is about
> > implementing a network with no network interfaces (NICs) at all.
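> >
> > On the host, such a network boils down to something tiny: a Linux
> > bridge with no ports enslaved to it. A minimal sketch of the idea
> > (illustrative only, not Vdsm code; the bridge name is made up):
> >
> >     import subprocess
> >
> >     def create_nicless_bridge(name="blue"):
> >         # create a bridge that is attached to no external nic
> >         subprocess.check_call(["brctl", "addbr", name])
> >         # bring it up so VM tap devices can be plugged in later
> >         subprocess.check_call(["ip", "link", "set", name, "up"])
> >
> >     create_nicless_bridge()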
> > 
> > Having a disconnected network may at first seem to add complexity to
> > VM placement. Until now, we assumed that if a network (say, blue) is
> > defined on two hosts, the two hosts lie in the same broadcast domain.
> > If a couple of VMs are connected to "blue" it does not matter where
> > they run - they would always hear each other. This is of course no
> > longer true if one of the hosts implements "blue" as nicless.
> > However, this is nothing new. oVirt never validates the
> > single-broadcast-domain assumption, which can easily be broken by an
> > admin: on one host, an admin can implement blue using a nic that has
> > completely unrelated physical connectivity.
> > 
> > Benefits
> > ========
> > * All-in-One http://www.ovirt.org/Feature/AllInOne use case: we'd
> >   like to have a complete oVirt deployment that does not rely on
> >   external resources, such as layer-2 connectivity or DNS.
> > * Collaborative computing: an oVirt user may wish to have a group of
> >   VMs with heavy in-group secret communication, where only one of the
> >   VMs exposes an external web service. The in-group secret
> >   communication could be limited to a nic-less network; no need to
> >   let it spill outside.
> > * [SciFi] NIC-less networks can be tunneled to remote network
> >   segments over IP, so a layer-2 NIC may not be part of a network's
> >   definition.
> > 
> > Vdsm
> > ====
> > Vdsm already supports defining a network with no nics attached.
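> >
> > For instance, something along these lines should work against a
> > recent Vdsm over its XML-RPC interface (a sketch from memory - the
> > exact verb signature, host name and SSL details are illustrative,
> > so double-check against your Vdsm version):
> >
> >     import xmlrpclib  # python 2, which Vdsm uses
> >
> >     server = xmlrpclib.ServerProxy("https://my-host:54321")
> >     # a bridged network with no 'nic', 'bonding' or 'vlan' key -
> >     # i.e. a bridge with no external interface at all
> >     res = server.setupNetworks(
> >         {"blue": {"bridged": True}},   # networks to define
> >         {},                            # no bonding changes
> >         {"connectivityCheck": False},  # do not auto-rollback
> >     )
> >     print res["status"]["message"]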
> > 
> > Engine
> > ======
> > I am told that implementing this in Engine is quite a pain, as a
> > network is not a first-class citizen in the DB; it is more of an
> > attribute of its primary external interface.
> 
> There is more than that. 

Indeed, what you describe is a lot more than my planned idea. I'd say
that my Nicless_Network feature is a small building block of yours.
However, since it is a strictly required building block, Nicless_Network
should be implemented as a first stage, with the smarter scheduler logic
built on top of it.

I'd fire up a new Host Only Network feature page that would note
Nicless_Network as a requirement.

> You may take the approach of: 
> 1. Configure this network statically on a host 
> 2. Pin the VMs to the host, since otherwise what use is there in defining such a network for VMs if the scheduler is free to schedule them on different hosts?

I tried to answer this worry in my original post. A user may want to
redefine "blue" on eth2 instead of eth1. Another user may have his
reasons to define "blue" with no nic at all - he may devise a cool
tunnel to connect his host to the world (sketched below).
Nicless_Network is only about allowing this.
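
For illustration only (nothing Nicless_Network itself would do), such a
tunnel could be as simple as enslaving an L2-over-IP device to the
nic-less bridge; the device name and peer address below are made up:

    import subprocess

    def tunnel_bridge(bridge="blue", peer="192.0.2.7"):
        # an ethernet-over-GRE endpoint towards a remote host
        subprocess.check_call(["ip", "link", "add", "gre-blue",
                               "type", "gretap", "remote", peer])
        subprocess.check_call(["ip", "link", "set", "gre-blue", "up"])
        # plug it into the bridge: frames from VMs on "blue" now
        # travel over IP instead of over a physical nic
        subprocess.check_call(["brctl", "addif", bridge, "gre-blue"])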

I'd say that a host-only network is a nicless network with some logic
attached to it; I actually like the one you detail here:

> 
> Or, 
> 1. Create this network ad-hoc according to the first VM that needs it 
> 2. Use the VM affinity feature to state that these VMs must run together on the same host
> 3. Assigning a network to these VMs automatically configures the affinity.
> 
> The first is simplistic, and requires minimal changes to the engine (you do need to allow a LN as a device-less entity*); the second approach is more robust and user-friendly, but requires more work in the engine. 
> 
> On top of the above you may like to:
> 1. Allow this network to be NATed - libvirt already supports that - should be simple.
> 2. Combine this with the upcoming IP setting for the guests - A bit more complex 

Once we have IPAM ticking, we could apply it to any network, including
this one. Or did you mean something else?
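
As for NAT: that indeed looks simple. For reference, a rough sketch via
libvirt's python bindings (the network name and addresses are made up):

    import libvirt

    NAT_XML = """
    <network>
      <name>blue-nat</name>
      <forward mode='nat'/>
      <bridge name='blue-nat' stp='on'/>
      <ip address='192.168.100.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.100.2' end='192.168.100.254'/>
        </dhcp>
      </ip>
    </network>
    """

    conn = libvirt.open("qemu:///system")
    net = conn.networkDefineXML(NAT_XML)  # persistent definition
    net.create()                          # start it now
    net.setAutostart(1)                   # start on host boot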

> 3. You may want to easily define it as an Inter-VM-group-channel
> property, same as an affinity group, instead of explicitly defining
> such a network. Meaning: define a group of VMs, define affinity, define
> the Inter-VM-group-channel, define the group's SLA, etc. Let's admit
> that VMs that require this type of internal networking are part of a VM
> group that together composes a workload/application. 

I'm not sure why defining an "Inter-VM-group-channel" is easier than
calling it a "network". The VMs would see a NIC, and someone should
decide if there is only one NIC per VM and provide it with a MAC
address, so I do not see the merit of this abstraction layer. I'm
probably missing something crucial.

> 
> *A relatively easy change under the current modelling (a model that I
> don't like in the first place) is to define another 'NIC' of type
> bridge (same as you have a VLAN nic, a bond nic, and a NIC nic), so a
> 'floating bridge' is a LN on the bridge NIC. Ugly, but this is the
> current modelling.

We may implement Nicless_Network as something like that, under the hood.
(But bridge, vlan and bond are not nics; they are Linux net devices.)
> 
> > 
> > This message is an html-to-text rendering of
> > http://www.ovirt.org/Features/Nicless_Network
> > (I like the name, it sounds like jewellery)
> 
> The name commonly used for this is 'Host only network'. 
> Though we're really into inventing new terminologies for things, in this case I would rather not, since this term is used by similar solutions (VMware, Parallels, VirtualBox, etc.), hence it's not vendor-specific. 
> 
> In any case, 'Nicless' is a bad name, since the external interface may also be a bond. 


