1 Mar 2018, 5:27 p.m.
Hi Milan,
I tried to activate the debug mode, but the restart of libvirt crashed
something on the host: it was no longer possible to start any VM on it,
and migrations to it just never started. So I decided to restart it, and,
to be sure, I restarted all the hosts.
And... now the migration of all VMs, single- or multi-disk, works ?!? So
there was probably something hidden that was reset or repaired by the
global restart!... In French, we call that "tomber en marche" (roughly,
"falling into working order") ;)
So: solved. Thank you, and sorry for the wasted time!
--
Cordialement,
Frank Soyer
Mob. 06 72 28 38 53 - Fix. 05 49 50 52 34
On Monday, February 26, 2018 12:59 CET, Milan Zamazal <mzamazal@redhat.com> wrote:
=C2=A0"fsoyer" <fsoyer@systea.fr> writes:
> I don't believe that this is related to a host, tests have been done from victor
> source to ginger dest and ginger to victor. I don't see problems on storage
> (gluster 3.12 native managed by ovirt), when VMs with a single disk from 20 to
> 250G migrate without error in a few seconds and with no downtime.
The host itself may be fine, but libvirt/QEMU running there may expose
problems, perhaps just for some VMs. According to your logs something
is not behaving as expected on the source host during the faulty
migration.
> How can I enable this libvirt debug mode?
Set the following options in /etc/libvirt/libvirtd.conf (look for
examples in the comments there):
- log_level=1
- log_outputs="1:file:/var/log/libvirt/libvirtd.log"
and restart libvirt. Then /var/log/libvirt/libvirtd.log should contain
the log. It will be huge, so I suggest enabling it only while
reproducing the problem.
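For illustration, the whole cycle could look like this (the restart
command assumes a systemd-based host and is not from the original
message; adjust it to your init system):

  # in /etc/libvirt/libvirtd.conf, uncomment/add the two options:
  log_level = 1
  log_outputs = "1:file:/var/log/libvirt/libvirtd.log"

  # apply the change:
  systemctl restart libvirtd
  # ... reproduce the failing migration, save /var/log/libvirt/libvirtd.log ...
  # then comment the two options out again and restart libvirtd once more,
  # so the huge debug log stops growing.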
> --
>
> Cordialement,
>
> Frank Soyer
>
>
>
> On Friday, February 23, 2018 09:56 CET, Milan Zamazal <mzamazal@redhat.com> wrote:
> Maor Lipchuk <mlipchuk@redhat.com> writes:
>
>> I encountered a bug (see [1]) which contains the same error mentioned in
>> your VDSM logs (see [2]), but I doubt it is related.
>
> Indeed, it's not related.
>
> The error in vdsm_victor.log just means that the info gathering call
> tries to access the libvirt domain before the incoming migration is
> completed. It's ugly but harmless.
>
>> Milan, maybe you have any advice to troubleshoot the issue? Can the
>> libvirt/qemu logs help?
>
> It seems there is something wrong on (at least) the source host. There
> are no migration progress messages in vdsm_ginger.log and there are
> warnings about stale stat samples. That looks like problems with
> calling libvirt – slow and/or stuck calls, maybe due to storage
> problems. The possibly faulty second disk could cause that.
>
> libvirt debug logs could tell us whether that is indeed the problem and
> whether it is caused by storage or something else.
>
>> I would suggest opening a bug on that issue so we can track it more
>> properly.
>>
>> Regards,
>> Maor
>>
>>
>> [1]
>> https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to
>> VM running on 2 Hosts
>>
>> [2]
>> 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]
>> Internal server error (__init__:577)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
>>     res = method(**params)
>>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
>>     result = fn(*methodArgs)
>>   File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
>>     io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
>>   File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
>>     'current_values': v.getIoTune()}
>>   File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
>>     result = self.getIoTuneResponse()
>>   File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
>>     res = self._dom.blockIoTune(
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
>>     % self.vmid)
>> NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not
>> started yet or was shut down
>>
>> On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fsoyer@systea.fr> wrote:
>>
>>> Hi,
>>> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger
>>> (192.168.0.6) migrated (or failing to migrate...) to victor (192.168.0.5),
>>> while the engine.log in the first mail on 2018-02-12 was for VMs standing
>>> on victor, migrated (or failing to migrate...) to ginger. Symptoms were
>>> exactly the same in both directions, and the VMs worked like a charm
>>> before, and even after (migration "killed" by a poweroff of the VMs).
>>> Am I the only one experiencing this problem?
>>>
>>>
>>> Thanks
>>> --
>>>
>>> Cordialement,
>>>
>>> Frank Soyer
>>>
>>>
>>>
>>> On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk <mlipchuk@redhat.com>
>>> wrote:
>>>
>>>
>>> Hi Frank,
>>>
>>> Sorry about the delayed response.
>>> I've been going through the logs you attached, although I could not find
>>> any specific indication why the migration failed because of the disk you
>>> were mentioning.
>>> Does this VM run with both disks on the target host without migration?
>>>
>>> Regards,
>>> Maor
>>>
>>>
>>> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr> wrote:
>>>>
>>>> Hi Maor,
>>>> sorry for the double post, I've changed the email address of my account and
>>>> supposed that I'd need to re-post it.
>>>> And thank you for your time. Here are the logs. I added a vdisk to an
>>>> existing VM: it no longer migrates, and needs a poweroff after minutes.
>>>> Then simply deleting the second disk makes it migrate in exactly 9s without
>>>> problem!
>>>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
>>>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
>>>>
>>>> --
>>>>
>>>> Cordialement,
>>>>
>>>> Frank Soyer
>>>> On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
>>>>
>>>>
>>>> Hi Frank,
>>>>
>>>> I already replied on your last email.
>>>> Can you provide the VDSM logs from the time of the migration failure for
>>>> both hosts:
>>>> ginger.local.systea.fr and victor.local.systea.fr
>>>>
>>>> Thanks,
>>>> Maor
>>>>
>>>> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:
>>>>>
>>>>> Hi all,
>>>>> I discovered yesterday a problem when migrating VMs with more than one
>>>>> vdisk.
>>>>> On our test servers (oVirt 4.1, shared storage with Gluster), I created 2
>>>>> VMs needed for a test, from a template with a 20G vdisk. On these VMs I
>>>>> added a 100G vdisk (for these tests I didn't want to waste time extending
>>>>> the existing vdisks... but I lost time in the end...). The VMs with the 2
>>>>> vdisks work well.
>>>>> Now I saw some updates waiting on the host. I tried to put it in
>>>>> maintenance... but it got stuck on the two VMs. They were marked
>>>>> "migrating" but no longer accessible. Other (small) VMs with only 1 vdisk
>>>>> were migrated without problem at the same time.
>>>>> I saw that a kvm process for the (big) VMs was launched on the source
>>>>> AND destination host, but after tens of minutes the migration and the VMs
>>>>> were still frozen. I tried to cancel the migration for the VMs: it failed.
>>>>> The only way to stop it was to power off the VMs: the kvm process died on
>>>>> the 2 hosts and the GUI alerted on a failed migration.
>>>>> In doubt, I tried to delete the second vdisk on one of these VMs: it
>>>>> then migrated without error! And no access problem.
>>>>> I tried to extend the first vdisk of the second VM, then delete the
>>>>> second vdisk: it now migrates without problem!
>>>>>
>>>>> So after another test with a VM with 2 vdisks, I can say that this
>>>>> blocked the migration process :(
>>>>>
>>>>> In engine.log, for a VM with 1 vdisk migrating well, we see:
>>>>>
>>>>> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired
>>>>> to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]',
>>>>> sharedLocks=''}'
>>>>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
>>>>> Running command: MigrateVmToServerCommand internal: false. Entities
>>>>> affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction
>>>>> group MIGRATE_VM with role type USER
>>>>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
>>>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
>>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
>>>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='
>>>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false',
>>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>>>> params=[]}}]]'}), log id: 14f61ee0
>>>>> 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32)
>>>>> [2f712024-5982-46a8-82c8-fd8293da5725] START,
>>>>> MigrateBrokerVDSCommand(HostName = victor.local.systea.fr,
>>>>> MigrateVDSCommandParameters:{runAsync='true',
>>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
>>>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='
>>>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false',
>>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>>>> params=[]}}]]'}), log id: 775cd381
>>>>> 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32)
>>>>> [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand,
>>>>> log id: 775cd381
>>>>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
>>>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
>>>>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.db
>>>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32)
>>>>> [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID:
>>>>> VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725,
>>>>> Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom
>>>>> ID: null, Custom Event ID: -1, Message: Migration started (VM:
>>>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination:
>>>>> ginger.local.systea.fr, User: admin@internal-authz).
>>>>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66]
>>>>> START, FullListVDSCommand(HostName = victor.local.systea.fr,
>>>>> FullListVDSCommandParameters:{runAsync='true',
>>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
>>>>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66]
>>>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true,
>>>>> emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493,
>>>>> guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda},
>>>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true,
>>>>> timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR,
>>>>> guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false,
>>>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87
>>>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:
>>>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286
>>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi
>>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-
>>>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0
>>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}',
>>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]',
>>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
>>>>> readOnly='false', deviceAlias='input0', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm
>>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4
>>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a
>>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>>>> vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768,
>>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192,
>>>>> kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt,
>>>>> devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16,
>>>>> clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
>>>>> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1)
>>>>> [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
>>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9)
>>>>> [54a65b66] Received a vnc Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>>>> displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d,
>>>>> port=5901}
>>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9)
>>>>> [54a65b66] Received a lease Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456,
>>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:
>>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> was unexpectedly detected as 'MigratingTo' on VDS
>>>>> 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr)
>>>>> (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
>>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'
>>>>> is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(
>>>>> ginger.local.systea.fr) ignoring it in the refresh until migration is
>>>>> done
>>>>> ....
>>>>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'
>>>>> was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(
>>>>> victor.local.systea.fr)
>>>>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START,
>>>>> DestroyVDSCommand(HostName = victor.local.systea.fr,
>>>>> DestroyVmVDSCommandParameters:{runAsync='true',
>>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false',
>>>>> secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log
>>>>> id: 560eca57
>>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH,
>>>>> DestroyVDSCommand, log id: 560eca57
>>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> moved from 'MigratingFrom' --> 'Down'
>>>>> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status
>>>>> 'MigratingTo'
>>>>> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> moved from 'MigratingTo' --> 'Up'
>>>>> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) []
>>>>> START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr,
>>>>> MigrateStatusVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
>>>>> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) []
>>>>> FINISH, MigrateStatusVDSCommand, log id: 7a25c281
>>>>> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db
>>>>> broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) []
>>>>> EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID:
>>>>> 2f712024-5982-46a8-82c8-fd8293da5725, Job ID:
>>>>> 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID:
>>>>> null, Custom Event ID: -1, Message: Migration completed (VM:
>>>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination:
>>>>> ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual
>>>>> downtime: (N/A))
>>>>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (ForkJoinPool-1-worker-4) [] Lock freed to object
>>>>> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]',
>>>>> sharedLocks=''}'
>>>>> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START,
>>>>> FullListVDSCommand(HostName = ginger.local.systea.fr,
>>>>> FullListVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
>>>>> 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH,
>>>>> FullListVDSCommand, return: [{acpiEnable=true,
>>>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=,
>>>>> tabletEnable=true, pid=18748, guestDiskMapping={},
>>>>> transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2,
>>>>> guestNumaNodes=[Ljava.lang.Object;@760085fd,
>>>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87
>>>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:
>>>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286
>>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi
>>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-
>>>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0
>>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}',
>>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]',
>>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
>>>>> readOnly='false', deviceAlias='input0', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm
>>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4
>>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a
>>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768,
>>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false,
>>>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false,
>>>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3,
>>>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600,
>>>>> display=vnc}], log id: 7cc65298
>>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) []
>>>>> Received a vnc Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d,
>>>>> port=5901}
>>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) []
>>>>> Received a lease Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456,
>>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:
>>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a]
>>>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true,
>>>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=,
>>>>> tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_H
>>>>> ARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda},
>>>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true,
>>>>> timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj
>>>>> ect;@77951faf, custom={device_fbddd528-7d93-4
>>>>> 9c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fc
>>>>> c254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286
>>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi
>>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-
>>>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0
>>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}',
>>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]',
>>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
>>>>> readOnly='false', deviceAlias='input0', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm
>>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4
>>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a
>>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768,
>>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false,
>>>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false,
>>>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd,
>>>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620,
>>>>> display=vnc}], log id: 58cdef4c
>>>>> 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5)
>>>>> [7fcb200a] Received a vnc Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d,
>>>>> port=5901}
>>>>> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5)
>>>>> [7fcb200a] Received a lease Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456,
>>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:
>>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> For the VM with 2 vdisks we see:
>>>>>
>>>>> 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired
>>>>> to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]',
>>>>> sharedLocks=''}'
>>>>> 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>>>> Running command: MigrateVmToServerCommand internal: false. Entities
>>>>> affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction
>>>>> group MIGRATE_VM with role type USER
>>>>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5',
>>>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='
>>>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false',
>>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>>>> params=[]}}]]'}), log id: 3702a9e0
>>>>> 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49)
>>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] START,
>>>>> MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr,
>>>>> MigrateVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5',
>>>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='
>>>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false',
>>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>>>> params=[]}}]]'}), log id: 1840069c
>>>>> 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49)
>>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand,
>>>>> log id: 1840069c
>>>>> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
>>>>> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db
>>>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49)
>>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID:
>>>>> VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e,
>>>>> Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom
>>>>> ID: null, Custom Event ID: -1, Message: Migration started (VM:
>>>>> Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination:
>>>>> victor.local.systea.fr, User: admin@internal-authz).
>>>>> ...
>>>>> 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4)
>>>>> [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
>>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY)
>>>>> was unexpectedly detected as 'MigratingTo' on VDS
>>>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'
>>>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(
>>>>> victor.local.systea.fr) ignoring it in the refresh until migration is
>>>>> done
>>>>> ...
>>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY)
>>>>> was unexpectedly detected as 'MigratingTo' on VDS
>>>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'
>>>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(
>>>>> victor.local.systea.fr) ignoring it in the refresh until migration is
>>>>> done
>>>>>
>>>>>
>>>>>
>>>>> and so on, the last lines repeated indefinitely for hours until we powered
>>>>> off the VM...
>>>>> Is this something known? Any idea about that?
>>>>>
>>>>> Thanks
>>>>>
>>>>> oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
>>>>>
>>>>> --
>>>>>
>>>>> Cordialement,
>>>>>
>>>>> Frank Soyer
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
------=_=-_OpenGroupware_org_NGMime-18019-1519921630.582362-42------
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Length: 51348
<html>I Milan,<br />I tried to activate the debug mode, but the restart=
of libvirt crashed something on the host : it was no more possible to =
start any vm on it, and migration to it just never started. So I decide=
d to restart it, and to be sure, I've restarted all the hosts.<br />And=
... now the migration of all VMs, simple or multi-disks, works ?!? So, =
there was probably something hidden that was resetted or repaired by th=
e global restart !.... In french, we call that "tomber en marche" ;)<br=
/><br />So : solved. Thank you for the wasted time !<br /><br />--<br =
/><style type=3D"text/css">.Text1 {
color: black;
font-size:9pt;
font-family:Verdana;
}
.Text2 {
color: black;
font-size:7pt;
font-family:Verdana;
}</style><p class=3D"Text1">Cordialement,<br /><br /><b>Frank Soyer=
</b><br />Mob. 06 72 28 38 53 - Fix. 05 49 50 52 34</p><br /><br />Le =
Lundi, F=C3=A9vrier 26, 2018 12:59 CET, Milan Zamazal <mzamazal@redh=
at.com> a =C3=A9crit:<br /> <blockquote type=3D"cite" cite=3D"8=
78tbg9d1y.fsf@redhat.com">"fsoyer" <fsoyer@systea.fr> writes:<br =
/><br />> I don't beleive that this is relatd to a host, tests have =
been done from victor<br />> source to ginger dest and ginger to vic=
tor. I don't see problems on storage<br />> (gluster 3.12 native man=
aged by ovirt), when VMs with a uniq disk from 20 to<br />> 250G mig=
rate without error in some seconds and with no downtime.<br /><br />The=
host itself may be fine, but libvirt/QEMU running there may expose<br =
/>problems, perhaps just for some VMs. According to your logs something=
<br />is not behaving as expected on the source host during the faulty<=
br />migration.<br /><br />> How ca I enable this libvirt debug mode=
?<br /><br />Set the following options in /etc/libvirt/libvirtd.conf (=
look for<br />examples in comments there)<br /><br />- log=5Flevel=3D1<=
br />- log=5Foutputs=3D"1:file:/var/log/libvirt/libvirtd.log"<br /><br =
/>and restart libvirt. Then /var/log/libvirt/libvirtd.log should contai=
n<br />the log. It will be huge, so I suggest to enable it only for the=
time<br />of reproducing the problem.<br /><br />> --<br />><br =
/>> Cordialement,<br />><br />> Frank Soyer<br />><br />>=
; <br />><br />> Le Vendredi, F=C3=A9vrier 23, 2018 09:56 C=
ET, Milan Zamazal <mzamazal@redhat.com> a =C3=A9crit:<br />> &=
nbsp;Maor Lipchuk <mlipchuk@redhat.com> writes:<br />><br />&g=
t;> I encountered a bug (see [1]) which contains the same error ment=
ioned in<br />>> your VDSM logs (see [2]), but I doubt it is rela=
ted.<br />><br />> Indeed, it's not related.<br />><br />> =
The error in vdsm=5Fvictor.log just means that the info gathering call<=
br />> tries to access libvirt domain before the incoming migration =
is<br />> completed. It's ugly but harmless.<br />><br />>>=
Milan, maybe you have any advice to troubleshoot the issue? Will the<b=
r />>> libvirt/qemu logs can help?<br />><br />> It seems t=
here is something wrong on (at least) the source host. There<br />> =
are no migration progress messages in the vdsm=5Fginger.log and there a=
re<br />> warnings about stale stat samples. That looks like problem=
s with<br />> calling libvirt =E2=80=93 slow and/or stuck calls, may=
be due to storage<br />> problems. The possibly faulty second disk c=
ould cause that.<br />><br />> libvirt debug logs could tell us w=
hether that is indeed the problem and<br />> whether it is caused by=
storage or something else.<br />><br />>> I would suggest to =
open a bug on that issue so we can track it more<br />>> properly=
.<br />>><br />>> Regards,<br />>> Maor<br />>>=
<br />>><br />>> [1]<br />>> https://bugzilla.redhat.=
com/show=5Fbug.cgi?id=3D1486543 - Migration leads to<br />>> VM r=
unning on 2 Hosts<br />>><br />>> [2]<br />>> 2018-02=
-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]<br />&g=
t;> Internal server error (=5F=5Finit=5F=5F:577)<br />>> Trace=
back (most recent call last):<br />>> File "/usr/lib/python2.7/si=
te-packages/yajsonrpc/=5F=5Finit=5F=5F.py", line 572,<br />>> in =
=5Fhandle=5Frequest<br />>> res =3D method(**params)<br />>>=
; File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198,=
in<br />>> =5FdynamicMethod<br />>> result =3D fn(*methodA=
rgs)<br />>> File "/usr/share/vdsm/API.py", line 1454, in getAllV=
mIoTunePolicies<br />>> io=5Ftune=5Fpolicies=5Fdict =3D self.=5Fc=
if.getAllVmIoTunePolicies()<br />>> File "/usr/share/vdsm/clientI=
F.py", line 454, in getAllVmIoTunePolicies<br />>> 'current=5Fval=
ues': v.getIoTune()}<br />>> File "/usr/share/vdsm/virt/vm.py", l=
ine 2859, in getIoTune<br />>> result =3D self.getIoTuneResponse(=
)<br />>> File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoT=
uneResponse<br />>> res =3D self.=5Fdom.blockIoTune(<br />>>=
; File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line =
47,<br />>> in =5F=5Fgetattr=5F=5F<br />>> % self.vmid)<br =
/>>> NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594=
' was not<br />>> started yet or was shut down<br />>><br /=
>>> On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fsoyer@systea.fr&=
gt; wrote:<br />>><br />>>> Hi,<br />>>> Yes, o=
n 2018-02-16 (vdsm logs) I tried with a VM standing on ginger<br />>=
>> (192.168.0.6) migrated (or failed to migrate...) to victor (19=
2.168.0.5),<br />>>> while the engine.log in the first mail on=
2018-02-12 was for VMs standing<br />>>> on victor, migrated =
(or failed to migrate...) to ginger. Symptoms were<br />>>> ex=
actly the same, in both directions, and VMs works like a charm before,<=
br />>>> and even after (migration "killed" by a poweroff of V=
Ms).<br />>>> Am I the only one experimenting this problem ?<b=
r />>>><br />>>><br />>>> Thanks<br />>&g=
t;> --<br />>>><br />>>> Cordialement,<br />>&g=
t;><br />>>> *Frank Soyer *<br />>>><br />>>=
><br />>>><br />>>> Le Jeudi, F=C3=A9vrier 22, 201=
8 00:45 CET, Maor Lipchuk <mlipchuk@redhat.com><br />>>>=
a =C3=A9crit:<br />>>><br />>>><br />>>> Hi=
Frank,<br />>>><br />>>> Sorry about the delay repon=
d.<br />>>> I've been going through the logs you attached, alt=
hough I could not find<br />>>> any specific indication why th=
e migration failed because of the disk you<br />>>> were menti=
onning.<br />>>> Does this VM run with both disks on the targe=
t host without migration?<br />>>><br />>>> Regards,<=
br />>>> Maor<br />>>><br />>>><br />>>=
;> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr>=
wrote:<br />>>>><br />>>>> Hi Maor,<br />>&=
gt;>> sorry for the double post, I've change the email adress of =
my account and<br />>>>> supposed that I'd need to re-post =
it.<br />>>>> And thank you for your time. Here are the log=
s. I added a vdisk to an<br />>>>> existing VM : it no more=
migrates, needing to poweroff it after minutes.<br />>>>> =
Then simply deleting the second disk makes migrate it in exactly 9s wit=
hout<br />>>>> problem !<br />>>>> https://gist=
.github.com/fgth/4707446331d201eef574ac31b6e89561<br />>>>>=
https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d<br />>=
;>>><br />>>>> --<br />>>>><br />>&=
gt;>> Cordialement,<br />>>>><br />>>>> *=
Frank Soyer *<br />>>>> Le Mercredi, F=C3=A9vrier 14, 2018 =
11:04 CET, Maor Lipchuk <<br />>>>> mlipchuk@redhat.com&=
gt; a =C3=A9crit:<br />>>>><br />>>>><br />>=
>>> Hi Frank,<br />>>>><br />>>>> I al=
ready replied on your last email.<br />>>>> Can you provide=
the VDSM logs from the time of the migration failure for<br />>>=
>> both hosts:<br />>>>> ginger.local.systea.f <ht=
tp://ginger.local.systea.fr/>r and v<br />>>>> ictor.loc=
al.systea.fr<br />>>>><br />>>>> Thanks,<br />&=
gt;>>> Maor<br />>>>><br />>>>> On Wed=
, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:<br /=
>>>>>><br />>>>>> Hi all,<br />>>&g=
t;>> I discovered yesterday a problem when migrating VM with more=
than one<br />>>>>> vdisk.<br />>>>>> On=
our test servers (oVirt4.1, shared storage with Gluster), I created 2<=
br />>>>>> VMs needed for a test, from a template with a=
20G vdisk. On this VMs I<br />>>>>> added a 100G vdisk =
(for this tests I didn't want to waste time to extend<br />>>>=
>> the existing vdisks... But I lost time finally...). The VMs wi=
th the 2<br />>>>>> vdisks works well.<br />>>>=
>> Now I saw some updates waiting on the host. I tried to put it =
in<br />>>>>> maintenance... But it stopped on the two V=
M. They were marked "migrating",<br />>>>>> but no more =
accessible. Other (small) VMs with only 1 vdisk was migrated<br />>&=
gt;>>> without problem at the same time.<br />>>>>=
> I saw that a kvm process for the (big) VMs was launched on the sou=
rce<br />>>>>> AND destination host, but after tens of m=
inutes, the migration and the VMs<br />>>>>> was always =
freezed. I tried to cancel the migration for the VMs : failed.<br />>=
;>>>> The only way to stop it was to poweroff the VMs : the=
kvm process died on<br />>>>>> the 2 hosts and the GUI =
alerted on a failed migration.<br />>>>>> In doubt, I tr=
ied to delete the second vdisk on one of this VMs : it<br />>>>=
;>> migrates then without error ! And no access problem.<br />>=
;>>>> I tried to extend the first vdisk of the second VM, t=
he delete the<br />>>>>> second vdisk : it migrates now =
without problem !<br />>>>>><br />>>>>> S=
o after another test with a VM with 2 vdisks, I can say that this<br />=
>>>>> blocked the migration process :(<br />>>>=
>><br />>>>>> In engine.log, for a VMs with 1 vdis=
k migrating well, we see :<br />>>>>><br />>>>&=
gt;> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.Migr=
ateVmToServerCommand]<br />>>>>> (default task-28) [2f71=
2024-5982-46a8-82c8-fd8293da5725] Lock Acquired<br />>>>>&g=
t; to object 'EngineLock:{exclusiveLocks=3D'[3f57e669-5e4c-4d10-85cc-d5=
73004a099d=3DVM]',<br />>>>>> sharedLocks=3D''}'<br />&g=
t;>>>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.co=
re.bll.MigrateVmToServerCommand]<br />>>>>> (org.ovirt.t=
hread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]<br />>=
;>>>> Running command: MigrateVmToServerCommand internal: f=
alse. Entities<br />>>>>> affected : ID: 3f57e669-5e4c-4=
d10-85cc-d573004a099d Type: VMAction<br />>>>>> group MI=
GRATE=5FVM with role type USER<br />>>>>> 2018-02-12 16:=
46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]<b=
r />>>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-=
5982-46a8-82c8-fd8293da5725]<br />>>>>> START, MigrateVD=
SCommand( MigrateVDSCommandParameters:{runAsync=3D'true',<br />>>=
>>> hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1',<br />>=
>>>> vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost=
=3D'192.168.0.6',<br />>>>>> dstVdsId=3D'd569c2dd-8f30-4=
878-8aea-858db285cf69', dstHost=3D'<br />>>>>> 192.168.0=
.5:54321', migrationMethod=3D'ONLINE', tunnelMigration=3D'false',<br />=
>>>>> migrationDowntime=3D'0', autoConverge=3D'true', mi=
grateCompressed=3D'false',<br />>>>>> consoleAddress=3D'=
null', maxBandwidth=3D'500', enableGuestEvents=3D'true',<br />>>&=
gt;>> maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2',<b=
r />>>>>> convergenceSchedule=3D'[init=3D[{name=3DsetDow=
ntime, params=3D[100]}],<br />>>>>> stalling=3D[{limit=3D=
1, action=3D{name=3DsetDowntime, params=3D[150]}}, {limit=3D2,<br />>=
;>>>> action=3D{name=3DsetDowntime, params=3D[200]}}, {limi=
t=3D3,<br />>>>>> action=3D{name=3DsetDowntime, params=3D=
[300]}}, {limit=3D4,<br />>>>>> action=3D{name=3DsetDown=
time, params=3D[400]}}, {limit=3D6,<br />>>>>> action=3D=
{name=3DsetDowntime, params=3D[500]}}, {limit=3D-1, action=3D{name=3Dab=
ort,<br />>>>>> params=3D[]}}]]'}), log id: 14f61ee0<br =
/>>>>>> 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engin=
e.core.vdsbro<br />>>>>> ker.vdsbroker.MigrateBrokerVDSC=
ommand] (org.ovirt.thread.pool-6-thread-32)<br />>>>>> [=
2f712024-5982-46a8-82c8-fd8293da5725] START,<br />>>>>> =
MigrateBrokerVDSCommand(HostName =3D victor.local.systea.fr,<br />>&=
gt;>>> MigrateVDSCommandParameters:{runAsync=3D'true',<br />&g=
t;>>>> hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1',<br =
/>>>>>> vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d', s=
rcHost=3D'192.168.0.6',<br />>>>>> dstVdsId=3D'd569c2dd-=
8f30-4878-8aea-858db285cf69', dstHost=3D'<br />>>>>> 192=
.168.0.5:54321', migrationMethod=3D'ONLINE', tunnelMigration=3D'false',=
<br />>>>>> migrationDowntime=3D'0', autoConverge=3D'tru=
e', migrateCompressed=3D'false',<br />>>>>> consoleAddre=
ss=3D'null', maxBandwidth=3D'500', enableGuestEvents=3D'true',<br />>=
;>>>> maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D=
'2',<br />>>>>> convergenceSchedule=3D'[init=3D[{name=3D=
setDowntime, params=3D[100]}],<br />>>>>> stalling=3D[{l=
imit=3D1, action=3D{name=3DsetDowntime, params=3D[150]}}, {limit=3D2,<b=
r />>>>>> action=3D{name=3DsetDowntime, params=3D[200]}}=
, {limit=3D3,<br />>>>>> action=3D{name=3DsetDowntime, p=
arams=3D[300]}}, {limit=3D4,<br />>>>>> action=3D{name=3D=
setDowntime, params=3D[400]}}, {limit=3D6,<br />>>>>> ac=
tion=3D{name=3DsetDowntime, params=3D[500]}}, {limit=3D-1, action=3D{na=
me=3Dabort,<br />>>>>> params=3D[]}}]]'}), log id: 775cd=
381<br />>>>>> 2018-02-12 16:46:30,277+01 INFO [org.ovir=
t.engine.core.vdsbro<br />>>>>> ker.vdsbroker.MigrateBro=
kerVDSCommand] (org.ovirt.thread.pool-6-thread-32)<br />>>>>=
;> [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCo=
mmand,<br />>>>>> log id: 775cd381<br />>>>>=
> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.M=
igrateVDSCommand]<br />>>>>> (org.ovirt.thread.pool-6-th=
read-32) [2f712024-5982-46a8-82c8-fd8293da5725]<br />>>>>&g=
t; FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0<b=
r />>>>>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.eng=
ine.core.dal.db<br />>>>>> broker.auditloghandling.Audit=
LogDirector] (org.ovirt.thread.pool-6-thread-32)<br />>>>>&=
gt; [2f712024-5982-46a8-82c8-fd8293da5725] EVENT=5FID:<br />>>>=
;>> VM=5FMIGRATION=5FSTART(62), Correlation ID: 2f712024-5982-46a=
8-82c8-fd8293da5725,<br />>>>>> Job ID: 4bd19aa9-cc99-4d=
02-884e-5a1e857a7738, Call Stack: null, Custom<br />>>>>>=
; ID: null, Custom Event ID: -1, Message: Migration started (VM:<br />&=
gt;>>>> Oracle=5FSECONDARY, Source: victor.local.systea.fr,=
Destination:<br />>>>>> ginger.local.systea.fr, User: a=
dmin@internal-authz).<br />>>>>> 2018-02-12 16:46:31,106=
+01 INFO [org.ovirt.engine.core.vdsbro<br />>>>>> ker.vd=
sbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66]<br />&=
gt;>>>> START, FullListVDSCommand(HostName =3D victor.local=
.systea.fr,<br />>>>>> FullListVDSCommandParameters:{run=
Async=3D'true',<br />>>>>> hostId=3D'ce3938b1-b23f-4d22-=
840a-f17d7cd87bb1',<br />>>>>> vmIds=3D'[3f57e669-5e4c-4=
d10-85cc-d573004a099d]'}), log id: 54b4b435<br />>>>>> 2=
018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbro<br />>&=
>>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66]
>>>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true,
>>>>> emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493,
>>>>> guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda},
>>>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true,
>>>>> timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR,
>>>>> guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false,
>>>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet',
>>>>> type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> type=usb, port=1}', managed='false', plugged='true', readOnly='false',
>>>>> deviceAlias='input0', customProperties='[]', snapshotId='null',
>>>>> logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='ide',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>>>> vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768,
>>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192,
>>>>> kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt,
>>>>> devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16,
>>>>> clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
>>>>> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
>>>>> (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
>>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>>>> (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>>>> displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
>>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>>>> (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease,
>>>>> path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr)
>>>>> (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
>>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'
>>>>> is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr)
>>>>> ignoring it in the refresh until migration is done
>>>>> ....
>>>>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'
>>>>> was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
>>>>> (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr,
>>>>> DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0',
>>>>> gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57
>>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
>>>>> (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57
>>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> moved from 'MigratingFrom' --> 'Down'
>>>>> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status 'MigratingTo'
>>>>> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> moved from 'MigratingTo' --> 'Up'
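>>>>>
>>>>> For readability, the healthy sequence above is: the source reports the
>>>>> VM as Down after handover, the engine moves it 'MigratingFrom' --> 'Down',
>>>>> hands it over, and the destination takes it 'MigratingTo' --> 'Up'.
>>>>> A tiny, untested Python sketch of that expected order (an illustration
>>>>> only, not the engine's actual state machine):
>>>>>
>>>>> # Engine-side states of a healthy migration, in the order they
>>>>> # appear in the excerpt above (simplified illustration).
>>>>> HEALTHY_ORDER = ["MigratingFrom", "Down", "MigratingTo", "Up"]
>>>>>
>>>>> def follows_healthy_order(observed_states):
>>>>>     """True if the observed states occur as an in-order
>>>>>     subsequence of HEALTHY_ORDER."""
>>>>>     expected = iter(HEALTHY_ORDER)
>>>>>     return all(state in expected for state in observed_states)
>>>>>
>>>>> follows_healthy_order(["MigratingFrom", "Down", "Up"])  # True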
>>>>> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>>>> (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr,
>>>>> MigrateStatusVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
>>>>> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>>>> (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281
>>>>> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>>> (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID:
>>>>> 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738,
>>>>> Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed
>>>>> (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr,
>>>>> Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))
>>>>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (ForkJoinPool-1-worker-4) [] Lock freed to object
>>>>> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
>>>>> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
>>>>> (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr,
>>>>> FullListVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
>>>>> 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
>>>>> (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true,
>>>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true,
>>>>> pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0,
>>>>> cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd,
>>>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet',
>>>>> type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> type=usb, port=1}', managed='false', plugged='true', readOnly='false',
>>>>> deviceAlias='input0', customProperties='[]', snapshotId='null',
>>>>> logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='ide',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768,
>>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false,
>>>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false,
>>>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3,
>>>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600,
>>>>> display=vnc}], log id: 7cc65298
>>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>>>> (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
>>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>>>> (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease,
>>>>> path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
>>>>> (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true,
>>>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true,
>>>>> pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda},
>>>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true,
>>>>> timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf,
>>>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet',
>>>>> type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> type=usb, port=1}', managed='false', plugged='true', readOnly='false',
>>>>> deviceAlias='input0', customProperties='[]', snapshotId='null',
>>>>> logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='ide',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768,
>>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false,
>>>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false,
>>>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd,
>>>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620,
>>>>> display=vnc}], log id: 58cdef4c
>>>>> 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>>>> (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
>>>>> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>>>> (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease,
>>>>> path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>>
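>>>>>
>>>>> As an aside: when comparing a good run like this one against a bad
>>>>> one, it helps to pull out all engine.log lines that share a single
>>>>> correlation ID (the bracketed token after the thread name, e.g.
>>>>> [54a65b66] above). A rough, untested sketch of the idea; the log
>>>>> path and the regex are assumptions based on the layout shown here:
>>>>>
>>>>> import re
>>>>> from collections import defaultdict
>>>>>
>>>>> # Matches lines shaped like:
>>>>> # 2018-02-12 16:46:42,194+01 INFO [some.class] (thread) [cid] message
>>>>> LINE = re.compile(
>>>>>     r"^(?P<ts>\d{4}-\d\d-\d\d \d\d:\d\d:\d\d,\d+\+\d+) \w+ "
>>>>>     r"\[[^\]]+\] \([^)]+\) \[(?P<cid>[^\]]*)\] ?(?P<msg>.*)")
>>>>>
>>>>> def by_correlation_id(path="/var/log/ovirt-engine/engine.log"):
>>>>>     groups = defaultdict(list)
>>>>>     with open(path, errors="replace") as log:
>>>>>         for line in log:
>>>>>             m = LINE.match(line)
>>>>>             if m and m.group("cid"):
>>>>>                 groups[m.group("cid")].append((m.group("ts"), m.group("msg")))
>>>>>     return groups
>>>>>
>>>>> for ts, msg in by_correlation_id().get("54a65b66", []):
>>>>>     print(ts, msg)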
>>>>> For the VM with 2 vdisks we see:
>>>>>
>>>>> 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired
>>>>> to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}'
>>>>> 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>>>> Running command: MigrateVmToServerCommand internal: false. Entities
>>>>> affected: ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER
>>>>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5',
>>>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321',
>>>>> migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0',
>>>>> autoConverge='true', migrateCompressed='false', consoleAddress='null',
>>>>> maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2',
>>>>> maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>>>> action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}},
>>>>> {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}},
>>>>> {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0
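>>>>>
>>>>> The convergenceSchedule above is easier to read reflowed. As I
>>>>> understand it (my reading of the parameters, not official
>>>>> documentation), the engine starts with 100 ms of allowed downtime
>>>>> and raises it at each "stalling" verdict, aborting if the migration
>>>>> still stalls after the 500 ms step:
>>>>>
>>>>> # The schedule from the MigrateVDSCommand above, reflowed as a plain
>>>>> # Python structure; limit=-1 appears to mean "if still stalling, abort".
>>>>> convergence_schedule = {
>>>>>     "init": [{"name": "setDowntime", "params": [100]}],
>>>>>     "stalling": [
>>>>>         {"limit": 1, "action": {"name": "setDowntime", "params": [150]}},
>>>>>         {"limit": 2, "action": {"name": "setDowntime", "params": [200]}},
>>>>>         {"limit": 3, "action": {"name": "setDowntime", "params": [300]}},
>>>>>         {"limit": 4, "action": {"name": "setDowntime", "params": [400]}},
>>>>>         {"limit": 6, "action": {"name": "setDowntime", "params": [500]}},
>>>>>         {"limit": -1, "action": {"name": "abort", "params": []}},
>>>>>     ],
>>>>> }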
>>>>> 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START,
>>>>> MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr,
>>>>> MigrateVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5',
>>>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321',
>>>>> migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0',
>>>>> autoConverge='true', migrateCompressed='false', consoleAddress='null',
>>>>> maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2',
>>>>> maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>>>> action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}},
>>>>> {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}},
>>>>> {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c
>>>>> 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>>>> FINISH, MigrateBrokerVDSCommand, log id: 1840069c
>>>>> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
>>>>> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID:
>>>>> VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e,
>>>>> Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null,
>>>>> Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY,
>>>>> Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz).
>>>>> ...
>>>>> 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
>>>>> (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
>>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY)
>>>>> was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'
>>>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> ignoring it in the refresh until migration is done
>>>>> ...
>>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY)
>>>>> was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'
>>>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> ignoring it in the refresh until migration is done
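>>>>>
>>>>> Those two VmAnalyzer lines are the ones that keep repeating. To
>>>>> measure how long a migration has been looping like this, an untested
>>>>> sketch along these lines counts the "unexpectedly detected" messages
>>>>> per VM ID (the log path is again an assumption):
>>>>>
>>>>> import re
>>>>> from collections import Counter
>>>>>
>>>>> # Each monitoring cycle of a stuck migration logs one of these lines.
>>>>> STUCK = re.compile(
>>>>>     r"VM '(?P<vm>[0-9a-f-]{36})'\(\S+\) was unexpectedly "
>>>>>     r"detected as 'MigratingTo'")
>>>>>
>>>>> def count_stuck_cycles(path="/var/log/ovirt-engine/engine.log"):
>>>>>     hits = Counter()
>>>>>     with open(path, errors="replace") as log:
>>>>>         for line in log:
>>>>>             m = STUCK.search(line)
>>>>>             if m:
>>>>>                 hits[m.group("vm")] += 1
>>>>>     return hits
>>>>>
>>>>> print(count_stuck_cycles())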
;>>>><br />>>>>> and so on, last lines repea=
ted indefinitly for hours since we poweroff<br />>>>>> t=
he VM...<br />>>>>> Is this something known ? Any idea a=
bout that ?<br />>>>>><br />>>>>> Thanks<=
br />>>>>><br />>>>>> Ovirt 4.1.6, update=
d last at feb-13. Gluster 3.12.1.<br />>>>>><br />>&g=
t;>>> --<br />>>>>><br />>>>>> C=
ordialement,<br />>>>>><br />>>>>> *Frank=
Soyer *<br />>>>>><br />>>>>> =5F=5F=5F=5F=
=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=
=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F<br />>>=
>>> Users mailing list<br />>>>>> Users@ovirt.o=
rg<br />>>>>> http://lists.ovirt.org/mailman/listinfo/us=
ers<br /> </blockquote><br /> </html>