[Engine-devel] live migration and different technologies

Dan Kenigsberg danken at redhat.com
Mon Jun 11 12:41:38 UTC 2012


On Mon, Jun 11, 2012 at 10:47:16AM +0100, Daniel P. Berrange wrote:
> On Sat, Jun 09, 2012 at 03:57:40PM +0300, Itamar Heim wrote:
> > On 06/08/2012 06:54 PM, Daniel P. Berrange wrote:
> > >On Wed, Jun 06, 2012 at 05:15:53PM +0300, Itamar Heim wrote:
> > >>Hi Daniel,
> > >>
> > >>on the quantum-ovirt call today the question of live migration
> > >>between multiple technologies was raised.
> > >>
> > >>iirc, you implemented the abstraction in libvirt between what the
> > >>guest sees and the actual host networking implementation for live
> > >>migration.
> > >>
> > >>can you please share if there are any considerations around live
> > >>migrations across different network implementations (bridge, sr-iov,
> > >>ovs, qbg, openflow, etc.)?
> > >
> > >Yes, we added the ability to use libvirt's 'virtual network' APIs
> > >(virNetworkXXXXX) to define host networks using bridging, macvtap,
> > >etc, etc. A guest's NICs can then be configured solely using
> > ><interface type='network'>. This means that the guest XML will
> > >not have any host-specific data in it, as you see when using
> > ><interface type='bridge'> or <interface type='direct'>
> > >
> > >This means you can migrate between machines where the bridges have
> > >different names (eg br0 on host A and br7 on host B), without any
> > >limitations.
> > >
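
A rough sketch of how that looks in practice (the network name
'ovirtmgmt' and the bridge names here are just placeholders): each
host defines a network with the same name, mapped to its local
bridge:

    <!-- host A -->
    <network>
      <name>ovirtmgmt</name>
      <forward mode='bridge'/>
      <bridge name='br0'/>
    </network>

    <!-- host B: same network name, different bridge -->
    <network>
      <name>ovirtmgmt</name>
      <forward mode='bridge'/>
      <bridge name='br7'/>
    </network>

while the guest XML only ever references the network name:

    <interface type='network'>
      <source network='ovirtmgmt'/>
      <model type='virtio'/>
    </interface>
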
> > >You can also migrate between different impls of the same technology
> > >(eg traditional software bridging vs macvtap bridging) without
> > >limitations.
> > >
> > >Finally, you can migrate between completely different technologies
> > >(eg bridging vs vepa), but you will likely lose connectivity in
> > >the guests, since the technologies are not compatible at the ethernet
> > >layer.
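
To make the incompatible-technologies case concrete, a sketch of the
same network name backed by VEPA-mode macvtap instead of a bridge
(the interface name 'eth0' is a placeholder). The guest XML stays
unchanged, but after migration the guest's traffic is reflected off
the adjacent switch rather than switched in software on the host:

    <network>
      <name>ovirtmgmt</name>
      <forward mode='vepa'>
        <interface dev='eth0'/>
      </forward>
    </network>
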
> > 
> > can you please explain this point - how would a packet going out of
> > the host or arriving at the guest differ between a bridged and a
> > vepa implementation?
> 
> I'm not the expert on VEPA - I'm just relaying what I have been told
> wrt VEPA modes in the past.
> 
> IIUC, with VEPA modes there is quite a lot of extra traffic due to a
> handshake negotiation between the host & switch, before any guest
> traffic can pass, and there needs to be a special synchronization
> done with VEPA during migration to maintain state in the switch.
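
IIUC the state in question is what libvirt models in the
<virtualport> element for 802.1Qbg - a manager id, type id and a
per-association instance UUID - which the host/switch handshake
would have to re-establish on the destination. All values below are
placeholders:

    <interface type='direct'>
      <source dev='eth0' mode='vepa'/>
      <virtualport type='802.1Qbg'>
        <parameters managerid='1' typeid='2' typeidversion='1'
                    instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
      </virtualport>
    </interface>
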

But at least on the libvirt level, <virtualport> tags are ignored if a
domain is migrated from an 802.1Qb* setup to a plain bridge?

I suppose that a default <virtualport> tag cannot be generated, so
migration from bridge to Qb* is impossible without the source tweaking
the destination XML?
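
Presumably the source would have to pass updated guest XML at
migration time (e.g. via virsh migrate's --xml option) with a
<virtualport> block filled in for the destination, something like
(all values placeholders):

    <interface type='network'>
      <source network='ovirtmgmt'/>
      <virtualport type='802.1Qbg'>
        <parameters managerid='1' typeid='2' typeidversion='1'
                    instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
      </virtualport>
    </interface>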


