On 28.09.2017 at 12:44, Nir Soffer wrote:
On Thu, Sep 28, 2017 at 12:03 PM Gianluca Cecchi <gianluca.cecchi(a)gmail.com> wrote:
> Hello,
> I'm on 4.1.5 and I'm cloning a snapshot of a VM with 3 disks, for a total
> of about 200 GB to copy.
> The target I chose is on a different storage domain than the source one.
> They are both FC storage domains, with the source on SSD disks and the
> target on SAS disks.
>
> The disks are preallocated
>
> Now I have 3 processes of this kind:
> /usr/bin/qemu-img convert -p -t none -T none -f raw
>
> /rhev/data-center/59b7af54-0155-01c2-0248-000000000195/fad05d79-254d-4f40-8201-360757128ede/images/8f62600a-057d-4d59-9655-631f080a73f6/21a8812f-6a89-4015-a79e-150d7e202450
> -O raw
>
> /rhev/data-center/mnt/blockSD/6911716c-aa99-4750-a7fe-f83675a2d676/images/c3973d1b-a168-4ec5-8c1a-630cfc4b66c4/27980581-5935-4b23-989a-4811f80956ca
>
> but despite the hardware capabilities it seems to be copying using very
> low system resources.
>
> We run qemu-img convert (and other storage related commands) with:
> nice -n 19 ionice -c 3 qemu-img ...
ionice should not have any effect unless you use the CFQ I/O scheduler.
The intent is to limit the effect of these commands on running virtual
machines.
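For reference, nice -n 19 gives the process the lowest CPU priority and
ionice -c 3 puts it into the idle I/O scheduling class, which is only
honored by CFQ. A quick way to check which scheduler a device is using
(sda is just an example here, substitute the device backing your storage
domain):

  $ cat /sys/block/sda/queue/scheduler
  noop deadline [cfq]

The scheduler shown in brackets is the active one.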
> I see this both using iotop and vmstat
>
> vmstat 3 gives:
> ----io---- -system-- ------cpu-----
> bi bo in cs us sy id wa st
> 2527 698 3771 29394 1 0 89 10 0
>
> us 94% also seems very high - maybe this hypervisor is overloaded with
> other workloads?
> wa 89% seems very high
The alignment in the table is a bit off, but us is 1%. The 94 you saw is
part of cs=29394. A high percentage for wait is generally a good sign,
because it means that the system is busy with actual I/O work.
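(If the columns are hard to line up, recent procps versions can print
wide, aligned output, e.g.:

  vmstat -w 3

which makes it easier to match the values to the headers.)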
Obviously, this I/O work is rather slow, but at least qemu-img is making
requests to the kernel instead of doing other work, otherwise the user
percentage would be much higher.
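If you want to rule out the storage backend itself, it can also be worth
measuring raw throughput with direct I/O outside of qemu-img. A minimal
sketch, where the volume paths are only placeholders for the actual
source volume and a scratch volume (so that nothing important gets
overwritten):

  dd if=/path/to/source/volume of=/dev/null bs=8M count=128 iflag=direct
  dd if=/dev/zero of=/path/to/scratch/volume bs=8M count=128 oflag=direct

If those numbers are already low, the bottleneck is below qemu-img.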
Kevin