[ovirt-users] How long do your migrations last?
Roy Golan
rgolan at redhat.com
Thu Feb 19 08:30:06 UTC 2015
On 02/13/2015 08:20 PM, Markus Stockhausen wrote:
>> From: users-bounces at ovirt.org [users-bounces at ovirt.org] on behalf of Darrell Budic [budic at onholyground.com]
>> Sent: Friday, February 13, 2015 19:03
>> To: Nicolas Ecarnot
>> Cc: users
>> Subject: Re: [ovirt-users] How long do your migrations last?
>>
>> I’m under the impression it depends more on the VM's memory allocation
>> than on disk size. libvirt has to synchronize that memory over your network. Your
>> times sound like mine over 1G ethernet with a 9000 MTU; most of my machines
>> have 1-4GB RAM. I’ve another setup with a 10G backend that can migrate larger
>> machines much faster. Things that do a lot of memory access (databases, say)
>> or use more of their allocated memory tend to take longer to migrate, as it’s
>> more work for libvirt to get the memory synchronized.
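As a rough sanity check (back-of-envelope only, assuming a mostly idle
guest and the 30% bandwidth cap mentioned below):

  4 GB guest RAM over 1 GbE (~125 MB/s) at a 30% cap:
  4096 MB / 37.5 MB/s ~= 110 s

which is in the same ballpark as the ~90 s average reported below.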
>>
>> A 10G+ backend is the best way to speed this up, and there are libvirt variables
>> you can tweak to allocate more bandwidth to a migration (and the number of
>> simultaneous migrations you allow). I think the defaults are 3 simultaneous
>> migrations at a max of 30% of your available bandwidth. I don’t think this takes
>> bonds into account, so if you have bonded connections, you may be able to
>> allocate a higher percentage or allow more simultaneous migrations. Keep in mind
>> that if you’re sharing bandwidth/media with iSCSI, some bandwidth will be needed
>> there as well; how much depends on your storage load. A dedicated NIC could
>> definitely help, especially if you’re trying to tune libvirt for this.
>>
>> -Darrell
>>
>>> On Feb 13, 2015, at 8:53 AM, Nicolas Ecarnot <nicolas at ecarnot.net> wrote:
>>>
>>> Hello list,
>>>
>>> Our storage domains are iSCSI on dedicated network, and when migrating VMs, the duration varies according to the size of the vDisks.
>>>
>>> The smallest VMs are migrated in about 20 seconds, while the biggest ones may take 5 or 10 minutes, or more.
>>> The average duration is 90 seconds.
>>>
>>> Questions :
>>>
>>> 1- Though I may have understood that the migration task is handled by the SPM, I don't know what it actually does (which bytes go where).
>>>
>>> 2- Do our times sound OK, or do they look improvable?
>>>
>>> 3- What bottleneck should I investigate? I'm thinking about the hosts' dedicated NIC setup or the SAN; the MTU has already been set to 9000...
>>>
>>> Any ideas welcomed.
>>>
>>> --
>>> Nicolas Ecarnot
> If we are talking about migration of VMs - relocating the qemu process -
> then the speed depends mostly on memory change pressure: the
> more changes per second, the more copy rounds the migration needs.
> The best way to speed it up is to raise migration_max_bandwidth
> in /etc/vdsm/vdsm.conf from the default 30MB/s to something higher.
> We use 150MB/s on a 10Gbit network. With the default we have seen
> migrations that never complete.
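For reference, a minimal sketch of that knob in /etc/vdsm/vdsm.conf
(max_outgoing_migrations is the companion option in the vdsm builds I
know of; verify both names against your version's shipped vdsm.conf):

  [vars]
  # max bandwidth per outgoing migration, in MiB/s
  migration_max_bandwidth = 150
  # how many outgoing migrations a host runs in parallel
  max_outgoing_migrations = 3

vdsm needs a restart (service vdsmd restart) to pick the change up.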
+1
Moreover, upstream qemu has some more ways to speed this up (a rough
sketch follows below):
- post-copy migration (a.k.a. "user page faults") - basically switch the
VM over to the destination immediately and copy memory pages from the
source on demand
- migration over RDMA
- migration throttling (auto-converge) -
http://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00040.html
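For the curious, a rough sketch via the qemu HMP monitor. The capability
names below are the ones used upstream (post-copy was still experimental
and carried an x- prefix for a while), so treat the exact spelling as an
assumption and check your qemu version; desthost:4444 is a placeholder:

  (qemu) migrate_set_capability auto-converge on   # throttle guest CPUs if the migration does not converge
  (qemu) migrate_set_capability postcopy-ram on    # allow switching to post-copy
  (qemu) migrate -d tcp:desthost:4444              # start the migration in the background
  (qemu) migrate_start_postcopy                    # flip to post-copy mid-migration
  # with RDMA-capable hardware: migrate -d rdma:desthost:4444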
> When talking about disks, it depends on how many disks you
> have attached to a single VM. The more disks there are, and the more
> similar their sizes, the faster you can migrate/operate
> on them.
>
> For example, take a SAP system with 3 disks: 20GB system,
> 20GB executables and a 300GB database. When issuing disk
> operations (like snapshots), they will start in parallel for each disk.
> Disk operations will finish earlier for the smaller disks, so in the end
> you will have only one operation left, and that one may take hours.
>
> E.g. a snapshot deletion will run at ~220MB/s while all
> three disk snapshots are being merged, and drop to ~60MB/s once only
> one disk's snapshot deletion is still active.
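To put numbers on that (back-of-envelope, using the rates quoted above):

  300 GB database disk, merged alone at ~60 MB/s:
  300 * 1024 MB / 60 MB/s ~= 5120 s ~= 1.4 hours

which lines up with the "may take hours" observation.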
>
> Best regards.
>
> Markus