feature suggestion: migration network

Dan Kenigsberg danken at redhat.com
Thu Jan 10 12:04:58 UTC 2013


On Thu, Jan 10, 2013 at 11:04:04AM +0800, Mark Wu wrote:
> On 01/08/2013 09:04 PM, Dan Kenigsberg wrote:
> >There's talk about this for ages, so it's time to have proper discussion
> >and a feature page about it: let us have a "migration" network role, and
> >use such networks to carry migration data
> >
> >When Engine requests to migrate a VM from one node to another, the VM
> >state (BIOS, IO devices, RAM) is transferred over a TCP/IP connection
> >that is opened from the source qemu process to the destination qemu.
> >Currently, destination qemu listens for the incoming connection on the
> >management IP address of the destination host. This has serious
> >downsides: a "migration storm" may choke the destination's management
> >interface; migration is plaintext and ovirtmgmt includes Engine which
> >sits may sit the node cluster.
> >
> >With this feature, a cluster administrator may grant the "migration"
> >role to one of the cluster networks. Engine would use that network's IP
> >address on the destination host when it requests a migration of a VM.
> >With proper network setup, migration data would be separated to that
> >network.
> >
> >=== Benefit to oVirt ===
> >* Users would be able to define and dedicate a separate network for
> >   migration. Users that need quick migration would use nics with high
> >   bandwidth. Users who want to cap the bandwidth consumed by migration
> >   could define a migration network over nics with bandwidth limitation.
> >* Migration data can be limited to a separate network, that has no
> >   layer-2 access from Engine
> >
> >=== Vdsm ===
> >The "migrate" verb should be extended with an additional parameter,
> >specifying the address that the remote qemu process should listen on. A
> >new argument is to be added to the currently-defined migration
> >arguments:
> >* vmId: UUID
> >* dst: management address of destination host
> >* dstparams: hibernation volumes definition
> >* mode: migration/hibernation
> >* method: rotten legacy
> >* ''New'': migration uri, according to http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 such as tcp://<ip of migration network on remote node>
> If we would like to resolve the migration storm, we could also add
> the qemu migration bandwidth limit as a parameter to the migrate
> verb. Currently it is a static configuration on the vdsm host, which
> is not flexible. Engine could pass appropriate values according to
> the traffic load and bandwidth of the migration network.
> It could also be specified by the customer according to the priority
> they assign.

Yes, we should be able to cap vm traffic on vNics and on migration.

But as I've answered to Doron and Simon, I believe that bandwidth
capping should be kept out of this specific feature: when we define it
for VM networks, we should keep the migration network in mind, too.
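To make the proposal above more concrete, here is a rough sketch of how the
extended migrate verb's arguments and the tcp:// migration URI could be
assembled. The parameter name "miguri" and the helper names are my own
placeholders, not the final Vdsm API:

```python
# Sketch only: "miguri" and these helper names are assumptions for
# illustration, not the agreed Vdsm API.

def migration_uri(dest_addr, port=None):
    """Build a tcp:// migration URI for the remote qemu, in the form
    accepted by virDomainMigrateToURI2 (see
    http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2).
    An IPv6 literal must be bracketed inside a URI host."""
    host = '[%s]' % dest_addr if ':' in dest_addr else dest_addr
    if port:
        return 'tcp://%s:%d' % (host, port)
    return 'tcp://%s' % host

def build_migrate_params(vm_id, dst, migration_net_addr,
                         mode='migration', method='online'):
    """Assemble the migrate-verb arguments, adding the proposed
    migration-URI argument next to the currently-defined ones."""
    return {
        'vmId': vm_id,      # UUID
        'dst': dst,         # management address of destination host
        'mode': mode,       # migration/hibernation
        'method': method,   # legacy field
        # NEW: IP of the migration network on the remote node
        'miguri': migration_uri(migration_net_addr),
    }

params = build_migrate_params('00000000-0000-0000-0000-000000000000',
                              '192.0.2.10', '198.51.100.7')
print(params['miguri'])   # tcp://198.51.100.7
```

With a setup like this, Engine keeps addressing the host over `dst`
(management) while qemu's migration stream is steered to the dedicated
network via `miguri`.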



More information about the Arch mailing list