Hi,
I don't believe that this is related to a host; tests have been done from
victor (source) to ginger (dest) and from ginger to victor. I don't see
problems on storage (Gluster 3.12, native, managed by oVirt), where VMs with
a single disk from 20 to 250G migrate without error in a few seconds and with
no downtime.
How can I enable this libvirt debug mode?
--
Best regards,
Frank Soyer

On Friday, February 23, 2018, 09:56 CET, Milan Zamazal <mzamazal@redhat.com> wrote:
Maor Lipchuk <mlipchuk(a)redhat.com> writes:
I encountered a bug (see [1]) which contains the same error mentioned
in your VDSM logs (see [2]), but I doubt it is related.
Indeed, it's not related.
The error in vdsm_victor.log just means that the info-gathering call
tries to access the libvirt domain before the incoming migration is
completed. It's ugly but harmless.
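For illustration, the "ugly but harmless" error can be understood as a guard
pattern: until the incoming migration finishes, the VM object holds only a
stub in place of the libvirt domain, and any attribute access on the stub
raises NotConnectedError, which is what the periodic stats poll in [2] hit.
This is a minimal sketch with simplified, hypothetical names, not the actual
vdsm code:

```python
class NotConnectedError(Exception):
    """Raised when no libvirt domain is attached to the VM yet."""


class DisconnectedDomain(object):
    """Stub standing in for the libvirt domain before migration completes."""

    def __init__(self, vmid):
        self.vmid = vmid

    def __getattr__(self, name):
        # Any call (e.g. blockIoTune) on the stub fails loudly but harmlessly.
        raise NotConnectedError(
            "VM %s was not started yet or was shut down" % self.vmid)


class Vm(object):
    def __init__(self, vmid):
        # Until the incoming migration finishes, _dom is only the stub.
        self._dom = DisconnectedDomain(vmid)

    def getIoTune(self):
        # A stats poll arriving too early hits the stub and raises.
        return self._dom.blockIoTune(0)


vm = Vm("755cf168-de65-42ed-b22f-efe9136f7594")
try:
    vm.getIoTune()
except NotConnectedError as e:
    print(e)  # prints: VM 755cf168-de65-42ed-b22f-efe9136f7594 was not started yet or was shut down
```

Once migration completes, the real libvirt domain would replace the stub and
the same calls succeed, which is why the error disappears on its own.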
Milan, do you have any advice for troubleshooting the issue? Can the
libvirt/qemu logs help?
It seems there is something wrong on (at least) the source host. There
are no migration progress messages in vdsm_ginger.log, and there are
warnings about stale stat samples. That looks like problems with
calling libvirt – slow and/or stuck calls, maybe due to storage
problems. The possibly faulty second disk could cause that.
libvirt debug logs could tell us whether that is indeed the problem and
whether it is caused by storage or something else.
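For reference, libvirt debug logging is normally enabled through
/etc/libvirt/libvirtd.conf. The values below are the commonly documented
settings, not something taken from this thread, and note that on oVirt hosts
vdsm manages the libvirt configuration, so check the vdsm/oVirt documentation
before editing by hand:

```
# /etc/libvirt/libvirtd.conf -- example debug settings (verify against
# your libvirt version's logging documentation)
log_filters="1:qemu 1:libvirt 4:object 4:json 4:event 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```

After changing the file, restart libvirtd on both hosts and reproduce the
failing migration, then collect /var/log/libvirt/libvirtd.log from source and
destination.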
I would suggest opening a bug on that issue so we can track it more
properly.
Regards,
Maor
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to
VM running on 2 Hosts
[2]
2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]
Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
    io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
  File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
    'current_values': v.getIoTune()}
  File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
    result = self.getIoTuneResponse()
  File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
    res = self._dom.blockIoTune(
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
    % self.vmid)
NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not
started yet or was shut down
On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fsoyer(a)systea.fr> wrote:
> Hi,
> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger
> (192.168.0.6) migrated (or failing to migrate...) to victor (192.168.0.5),
> while the engine.log in the first mail on 2018-02-12 was for VMs standing
> on victor, migrated (or failing to migrate...) to ginger. Symptoms were
> exactly the same in both directions, and the VMs work like a charm before,
> and even after (migration "killed" by a poweroff of the VMs).
> Am I the only one experiencing this problem?
>
>
> Thanks
> --
>
> Best regards,
>
> Frank Soyer
>
>
>
> On Thursday, February 22, 2018, 00:45 CET, Maor Lipchuk <mlipchuk@redhat.com>
> wrote:
>
> Hi Frank,
>
> Sorry about the delayed response.
> I've been going through the logs you attached, but I could not find
> any specific indication of why the migration failed because of the disk
> you mentioned.
> Does this VM run with both disks on the target host without migration?
>
> Regards,
> Maor
>
>
> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer(a)systea.fr> wrote:
>>
>> Hi Maor,
>> sorry for the double post, I changed the email address of my account and
>> supposed that I'd need to re-post it.
>> And thank you for your time. Here are the logs. I added a vdisk to an
>> existing VM: it no longer migrates and had to be powered off after minutes.
>> Then simply deleting the second disk makes it migrate in exactly 9s
>> without problem!
>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
>>
>> --
>>
>> Best regards,
>>
>> Frank Soyer
>> On Wednesday, February 14, 2018, 11:04 CET, Maor Lipchuk <
>> mlipchuk(a)redhat.com> wrote:
>>
>>
>> Hi Frank,
>>
>> I already replied to your last email.
>> Can you provide the VDSM logs from the time of the migration failure for
>> both hosts:
>> ginger.local.systea.fr and victor.local.systea.fr
>>
>> Thanks,
>> Maor
>>
>> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer(a)systea.fr> wrote:
>>>
>>> Hi all,
>>> I discovered yesterday a problem when migrating VMs with more than one
>>> vdisk.
>>> On our test servers (oVirt 4.1, shared storage with Gluster), I created 2
>>> VMs needed for a test, from a template with a 20G vdisk. On these VMs I
>>> added a 100G vdisk (for these tests I didn't want to waste time extending
>>> the existing vdisks... but I lost time in the end...). The VMs with the 2
>>> vdisks work well.
>>> Then I saw some updates waiting on the host. I tried to put it in
>>> maintenance... but it got stuck on the two VMs. They were marked "migrating",
>>> but no longer accessible. Other (small) VMs with only 1 vdisk were migrated
>>> without problem at the same time.
>>> I saw that a kvm process for the (big) VMs was launched on the source
>>> AND destination hosts, but after tens of minutes the migration and the VMs
>>> were still frozen. I tried to cancel the migration for the VMs: it failed.
>>> The only way to stop it was to power off the VMs: the kvm processes died on
>>> the 2 hosts and the GUI alerted on a failed migration.
>>> In doubt, I tried to delete the second vdisk on one of these VMs: it
>>> then migrated without error! And no access problem.
>>> I tried to extend the first vdisk of the second VM, then delete the
>>> second vdisk: it now migrates without problem!
>>>
>>> So after another test with a VM with 2 vdisks, I can say that this is
>>> what blocked the migration process :(
>>>
>>> In engine.log, for a VM with 1 vdisk migrating well, we see:
>>> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired
>>> to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]',
>>> sharedLocks=''}'
>>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
>>> Running command: MigrateVmToServerCommand internal: false. Entities
>>> affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction
>>> group MIGRATE_VM with role type USER
>>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321',
>>> migrationMethod='ONLINE', tunnelMigration='false',
>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>> params=[]}}]]'}), log id: 14f61ee0
>>> 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START,
>>> MigrateBrokerVDSCommand(HostName = victor.local.systea.fr,
>>> MigrateVDSCommandParameters:{runAsync='true',
>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321',
>>> migrationMethod='ONLINE', tunnelMigration='false',
>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>> params=[]}}]]'}), log id: 775cd381
>>> 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH,
>>> MigrateBrokerVDSCommand, log id: 775cd381
>>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
>>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID:
>>> VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725,
>>> Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom
>>> ID: null, Custom Event ID: -1, Message: Migration started (VM:
>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination:
>>> ginger.local.systea.fr, User: admin@internal-authz).
>>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
>>> (DefaultQuartzScheduler9) [54a65b66] START,
>>> FullListVDSCommand(HostName = victor.local.systea.fr,
>>> FullListVDSCommandParameters:{runAsync='true',
>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
>>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
>>> (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true,
>>> emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493,
>>> guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda},
>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true,
>>> timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR,
>>> guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false,
>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>> customProperties='[]', snapshotId='null', logicalName='null',
>>> hostDevice='null'},
>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}',
>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]',
>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
>>> readOnly='false', deviceAlias='input0', customProperties='[]',
>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]',
>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>> customProperties='[]', snapshotId='null', logicalName='null',
>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>> vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768,
>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>> numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192,
>>> kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt,
>>> devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16,
>>> clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
>>> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
>>> (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>> (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM
>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>> displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d,
>>> port=5901}
>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>> (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM
>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456,
>>> device=lease,
>>> path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases,
>>> type=lease}
>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>> was unexpectedly detected as 'MigratingTo' on VDS
>>> 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr)
>>> (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'
>>> is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr)
>>> ignoring it in the refresh until migration is done
>>> ....
>>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'
>>> was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
>>> (ForkJoinPool-1-worker-11) [] START,
>>> DestroyVDSCommand(HostName = victor.local.systea.fr,
>>> DestroyVmVDSCommandParameters:{runAsync='true',
>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false',
>>> secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log
>>> id: 560eca57
>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
>>> (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57
>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>> moved from 'MigratingFrom' --> 'Down'
>>> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>> to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status
>>> 'MigratingTo'
>>> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>> moved from 'MigratingTo' --> 'Up'
>>> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (ForkJoinPool-1-worker-4) [] START,
>>> MigrateStatusVDSCommand(HostName = ginger.local.systea.fr,
>>> MigrateStatusVDSCommandParameters:{runAsync='true',
>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
>>> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281
>>> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (ForkJoinPool-1-worker-4) []
>>> EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID:
>>> 2f712024-5982-46a8-82c8-fd8293da5725, Job ID:
>>> 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID:
>>> null, Custom Event ID: -1, Message: Migration completed (VM:
>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination:
>>> ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual
>>> downtime: (N/A))
>>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>> (ForkJoinPool-1-worker-4) [] Lock freed to object
>>> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]',
>>> sharedLocks=''}'
>>> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
>>> (ForkJoinPool-1-worker-4) [] START,
>>> FullListVDSCommand(HostName = ginger.local.systea.fr,
>>> FullListVDSCommandParameters:{runAsync='true',
>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
>>> 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
>>> (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true,
>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=,
>>> tabletEnable=true, pid=18748, guestDiskMapping={},
>>> transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2,
>>> guestNumaNodes=[Ljava.lang.Object;@760085fd,
>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>> customProperties='[]', snapshotId='null', logicalName='null',
>>> hostDevice='null'},
>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}',
>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]',
>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
>>> readOnly='false', deviceAlias='input0', customProperties='[]',
>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]',
>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>> customProperties='[]', snapshotId='null', logicalName='null',
>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768,
>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false,
>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false,
>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3,
>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600,
>>> display=vnc}], log id: 7cc65298
>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>> (ForkJoinPool-1-worker-4) []
>>> Received a vnc Device without an address when processing VM
>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d,
>>> port=5901}
>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>> (ForkJoinPool-1-worker-4) []
>>> Received a lease Device without an address when processing VM
>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456,
>>> device=lease,
>>> path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases,
>>> type=lease}
>>> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
>>> (DefaultQuartzScheduler5) [7fcb200a]
>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true,
>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=,
>>> tabletEnable=true, pid=18748,
>>> guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda},
>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true,
>>> timeOffset=0, cpuType=Nehalem, smp=2,
>>> guestNumaNodes=[Ljava.lang.Object;@77951faf,
>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>> customProperties='[]', snapshotId='null', logicalName='null',
>>> hostDevice='null'},
>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}',
>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]',
>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
>>> readOnly='false', deviceAlias='input0', customProperties='[]',
>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]',
>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>> customProperties='[]', snapshotId='null', logicalName='null',
>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768,
>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false,
>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false,
>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd,
>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620,
>>> display=vnc}], log id: 58cdef4c
>>> 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>> (DefaultQuartzScheduler5)
>>> [7fcb200a] Received a vnc Device without an address when processing VM
>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d,
>>> port=5901}
>>> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring]
>>> (DefaultQuartzScheduler5)
>>> [7fcb200a] Received a lease Device without an address when processing VM
>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456,
>>> device=lease,
>>> path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases,
>>> type=lease}
>>>
>>>
>>>
>>>
>>> For the VM with 2 vdisks, we see:
>>>
>>> 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>> (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired
>>> to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]',
>>> sharedLocks=''}'
>>> 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>> Running command: MigrateVmToServerCommand internal: false. Entities
>>> affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction
>>> group MIGRATE_VM with role type USER
>>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5',
>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321',
>>> migrationMethod='ONLINE', tunnelMigration='false',
>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>> params=[]}}]]'}), log id: 3702a9e0
>>> 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START,
>>> MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr,
>>> MigrateVDSCommandParameters:{runAsync='true',
>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5',
>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321',
>>> migrationMethod='ONLINE', tunnelMigration='false',
>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>> params=[]}}]]'}), log id: 1840069c
>>> 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (org.ovirt.thread.pool-6-thread-49)
>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand,
>>> log id: 1840069c
>>> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
>>> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID:
>>> VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e,
>>> Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom
>>> ID: null, Custom Event ID: -1, Message: Migration started (VM:
>>> Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination:
>>> victor.local.systea.fr, User: admin@internal-authz).
>>> ...
>>> 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
>>> (DefaultQuartzScheduler4)
>>> [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY)
>>> was unexpectedly detected as 'MigratingTo' on VDS
>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'
>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>> ignoring it in the refresh until migration is done
>>> ...
>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY)
>>> was unexpectedly detected as 'MigratingTo' on VDS
>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'
>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>> ignoring it in the refresh until migration is done
>>>
>>> and so on, the last lines repeated indefinitely for hours, until we powered off
>>> the VM...
>>> Is this something known? Any idea about that?
>>>
>>> Thanks
>>>
>>> Ovirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
>>>
>>> --
>>>
>>> Cordialement,
>>>
>>> *Frank Soyer *
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
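Since the thread asks how to enable libvirt debug logging, a commonly used host-side configuration is sketched below. This is not taken from the thread itself: the filter and output values follow the libvirt logging documentation of that era and should be adapted to the installed version.

```ini
# /etc/libvirt/libvirtd.conf -- illustrative debug-logging settings
log_filters="1:qemu 1:libvirt 4:object 4:json 4:event 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```

After editing, restart the daemon (e.g. `systemctl restart libvirtd`) on both hosts and reproduce the failing migration, so that slow or stuck libvirt calls show up in `/var/log/libvirt/libvirtd.log`.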
------=_=-_OpenGroupware_org_NGMime-15477-1519640593.181730-42------
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Length: 47409
<html>Hi,<br />I don't beleive that this is relatd to a host, tests hav=
e been done from victor source to ginger dest and ginger to victor. I d=
on't see problems on storage (gluster 3.12 native managed by ovirt), wh=
en VMs with a uniq disk from 20 to 250G migrate without error in some s=
econds and with no downtime.<br />How ca I enable this libvirt debug mo=
de ?<br /><br />--<br /><style type=3D"text/css">.Text1
{
color: black;
font-size:9pt;
font-family:Verdana;
}
.Text2 {
color: black;
font-size:7pt;
font-family:Verdana;
}</style><p class=3D"Text1">Cordialement,<br /><br
/><b>Frank Soyer=
</b><br /><br /> </p><br /><br />Le
Vendredi, F=C3=A9vrier 23, 20=
18 09:56 CET, Milan Zamazal &lt;mzamazal(a)redhat.com&gt; a =C3=A9crit:<b=
r /> <blockquote type=3D"cite"
cite=3D"87y3jkrsmp.fsf(a)redhat.com">=
Maor Lipchuk &lt;mlipchuk(a)redhat.com&gt; writes:<br /><br />> I
enco=
untered a bug (see [1]) which contains the same error mentioned in<br /=
> your VDSM logs (see [2]), but I doubt it is related.<br
/><br />I=
ndeed, it's not related.<br /><br />The error in
vdsm=5Fvictor.log just=
means that the info gathering call<br />tries to access libvirt domain=
before the incoming migration is<br />completed. It's ugly but harmles=
s.<br /><br />> Milan, maybe you have any advice to troubleshoot the=
issue? Will the<br />> libvirt/qemu logs can help?<br /><br />It
se=
ems there is something wrong on (at least) the source host. There<br />=
are no migration progress messages in the vdsm=5Fginger.log and there a=
re<br />warnings about stale stat samples. That looks like problems wit=
h<br />calling libvirt =E2=80=93 slow and/or stuck calls, maybe due to =
storage<br />problems. The possibly faulty second disk could cause that=
.<br /><br />libvirt debug logs could tell us whether that is indeed th=
e problem and<br />whether it is caused by storage or something else.<b=
r /><br />> I would suggest to open a bug on that issue so we can tr=
ack it more<br />> properly.<br />><br />>
Regards,<br />> =
Maor<br />><br />><br />> [1]<br />>
https://bugzilla.redha=
t.com/show=5Fbug.cgi?id=3D1486543 - Migration leads to<br />> VM run=
ning on 2 Hosts<br />><br />> [2]<br />> 2018-02-16
09:43:35,2=
36+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]<br />> Internal se=
rver error (=5F=5Finit=5F=5F:577)<br />> Traceback (most recent call=
last):<br />> File "/usr/lib/python2.7/site-packages/yajsonrpc/=5F=5F=
init=5F=5F.py", line 572,<br />> in =5Fhandle=5Frequest<br
/>> re=
s =3D method(**params)<br />> File "/usr/lib/python2.7/site-packages=
/vdsm/rpc/Bridge.py", line 198, in<br />> =5FdynamicMethod<br
/>>=
result =3D fn(*methodArgs)<br />> File "/usr/share/vdsm/API.py",
li=
ne 1454, in getAllVmIoTunePolicies<br />> io=5Ftune=5Fpolicies=5Fdic=
t =3D self.=5Fcif.getAllVmIoTunePolicies()<br />> File "/usr/share/v=
dsm/clientIF.py", line 454, in getAllVmIoTunePolicies<br />>
'curren=
t=5Fvalues': v.getIoTune()}<br />> File
"/usr/share/vdsm/virt/vm.py"=
, line 2859, in getIoTune<br />> result =3D self.getIoTuneResponse()=
<br />> File "/usr/share/vdsm/virt/vm.py", line 2878, in
getIoTuneRe=
sponse<br />> res =3D self.=5Fdom.blockIoTune(<br />> File
"/usr/=
lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47,<br />>=
in =5F=5Fgetattr=5F=5F<br />> % self.vmid)<br />>
NotConnectedEr=
ror: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not<br />>
start=
ed yet or was shut down<br />><br />> On Thu, Feb 22, 2018 at
4:2=
2 PM, fsoyer &lt;fsoyer(a)systea.fr&gt; wrote:<br />><br
/>>> Hi=
,<br />>> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standi=
ng on ginger<br />>> (192.168.0.6) migrated (or failed to migrate=
...) to victor (192.168.0.5),<br />>> while the engine.log in the=
first mail on 2018-02-12 was for VMs standing<br />>> on victor,=
migrated (or failed to migrate...) to ginger. Symptoms were<br />>&=
gt; exactly the same, in both directions, and VMs works like a charm be=
fore,<br />>> and even after (migration "killed" by a
poweroff of=
VMs).<br />>> Am I the only one experimenting this problem ?<br =
/>>><br />>><br />>> Thanks<br
/>>> --<br />>=
;><br />>> Cordialement,<br />>><br
/>>> *Frank So=
yer *<br />>><br />>><br
/>>><br />>> Le Jeudi,=
F=C3=A9vrier 22, 2018 00:45 CET, Maor Lipchuk &lt;mlipchuk(a)redhat.com&=
gt;<br />>> a =C3=A9crit:<br />>><br
/>>><br />>&g=
t; Hi Frank,<br />>><br />>> Sorry about the delay
repond.<=
br />>> I've been going through the logs you attached, although I=
could not find<br />>> any specific indication why the migration=
failed because of the disk you<br />>> were mentionning.<br
/>&g=
t;> Does this VM run with both disks on the target host without migr=
ation?<br />>><br />>> Regards,<br
/>>> Maor<br />>=
;><br />>><br />>> On Fri, Feb 16, 2018 at
11:03 AM, fso=
yer &lt;fsoyer(a)systea.fr&gt; wrote:<br />>>><br
/>>>>=
Hi Maor,<br />>>> sorry for the double post, I've change
the =
email adress of my account and<br />>>> supposed that I'd
need=
to re-post it.<br />>>> And thank you for your time. Here
are=
the logs. I added a vdisk to an<br />>>> existing VM : it no
=
more migrates, needing to poweroff it after minutes.<br />>>>
=
Then simply deleting the second disk makes migrate it in exactly 9s wit=
hout<br />>>> problem !<br />>>>
https://gist.github.=
com/fgth/4707446331d201eef574ac31b6e89561<br />>>>
https://gis=
t.github.com/fgth/f8de9c22664aee53722af676bff8719d<br
/>>>><br=
/>>>> --<br />>>><br
/>>>> Cordialement,<br=
/>>>><br />>>> *Frank Soyer *<br
/>>>> Le M=
ercredi, F=C3=A9vrier 14, 2018 11:04 CET, Maor Lipchuk <<br
/>>&g=
t;> mlipchuk(a)redhat.com&gt; a =C3=A9crit:<br
/>>>><br />>=
;>><br />>>> Hi Frank,<br
/>>>><br />>>&g=
t; I already replied on your last email.<br />>>> Can you
prov=
ide the VDSM logs from the time of the migration failure for<br />>&=
gt;> both hosts:<br />>>> ginger.local.systea.f
<http://=
ginger.local.systea.fr/>r and v<br />>>>
ictor.local.systea=
.fr<br />>>><br />>>> Thanks,<br
/>>>> Maor<=
br />>>><br />>>> On Wed, Feb 14, 2018
at 11:23 AM, f=
soyer &lt;fsoyer(a)systea.fr&gt; wrote:<br
/>>>>><br />>&g=
t;>> Hi all,<br />>>>> I discovered
yesterday a probl=
em when migrating VM with more than one<br />>>>>
vdisk.<br=
/>>>>> On our test servers (oVirt4.1, shared storage with
=
Gluster), I created 2<br />>>>> VMs needed for a test,
from=
a template with a 20G vdisk. On this VMs I<br />>>>>
added=
a 100G vdisk (for this tests I didn't want to waste time to extend<br =
/>>>>> the existing vdisks... But I lost time finally...).
=
The VMs with the 2<br />>>>> vdisks works well.<br
/>>&g=
t;>> Now I saw some updates waiting on the host. I tried to put i=
t in<br />>>>> maintenance... But it stopped on the two
VM.=
They were marked "migrating",<br />>>>> but
no more access=
ible. Other (small) VMs with only 1 vdisk was migrated<br
/>>>>=
;> without problem at the same time.<br />>>>> I
saw tha=
t a kvm process for the (big) VMs was launched on the source<br />>&=
gt;>> AND destination host, but after tens of minutes, the migrat=
ion and the VMs<br />>>>> was always freezed. I tried to
ca=
ncel the migration for the VMs : failed.<br />>>>> The
only=
way to stop it was to poweroff the VMs : the kvm process died on<br />=
>>>> the 2 hosts and the GUI alerted on a failed migration.=
<br />>>>> In doubt, I tried to delete the second vdisk
on =
one of this VMs : it<br />>>>> migrates then without
error =
! And no access problem.<br />>>>> I tried to extend the
fi=
rst vdisk of the second VM, the delete the<br />>>>>
second=
vdisk : it migrates now without problem !<br
/>>>>><br />&=
gt;>>> So after another test with a VM with 2 vdisks, I can sa=
y that this<br />>>>> blocked the migration process
:(<br /=
>>>><br
/>>>>> In engine.log, for a VMs with 1=
vdisk
migrating well, we see :<br />>>>><br
/>>>>=
> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.Migrate=
VmToServerCommand]<br />>>>> (default task-28)
[2f712024-59=
82-46a8-82c8-fd8293da5725] Lock Acquired<br />>>>> to
objec=
t 'EngineLock:{exclusiveLocks=3D'[3f57e669-5e4c-4d10-85cc-d573004a099d=3D=
VM]',<br />>>>> sharedLocks=3D''}'<br
/>>>>> 20=
18-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToSer=
verCommand]<br />>>>>
(org.ovirt.thread.pool-6-thread-32) [=
2f712024-5982-46a8-82c8-fd8293da5725]<br />>>>> Running
com=
mand: MigrateVmToServerCommand internal: false. Entities<br
/>>>&=
gt;> affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAct=
ion<br />>>>> group MIGRATE=5FVM with role type
USER<br />&=
gt;>>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.=
vdsbroker.MigrateVDSCommand]<br />>>>>
(org.ovirt.thread.po=
ol-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]<br
/>>>>=
;> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync=3D=
'true',<br />>>>>
hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd=
87bb1',<br />>>>>
vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a0=
99d', srcHost=3D'192.168.0.6',<br />>>>>
dstVdsId=3D'd569c2=
dd-8f30-4878-8aea-858db285cf69', dstHost=3D'<br
/>>>>> 192.=
168.0.5:54321', migrationMethod=3D'ONLINE',
tunnelMigration=3D'false',<=
br />>>>> migrationDowntime=3D'0',
autoConverge=3D'true', m=
igrateCompressed=3D'false',<br />>>>>
consoleAddress=3D'nul=
l', maxBandwidth=3D'500', enableGuestEvents=3D'true',<br
/>>>>=
> maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2',<br
/>>=
;>>> convergenceSchedule=3D'[init=3D[{name=3DsetDowntime, para=
ms=3D[100]}],<br />>>>> stalling=3D[{limit=3D1,
action=3D{n=
ame=3DsetDowntime, params=3D[150]}}, {limit=3D2,<br
/>>>>> =
action=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3,<br />>&g=
t;>> action=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4,<=
br />>>>> action=3D{name=3DsetDowntime, params=3D[400]}},
{=
limit=3D6,<br />>>>> action=3D{name=3DsetDowntime,
params=3D=
[500]}}, {limit=3D-1, action=3D{name=3Dabort,<br />>>>>
par=
ams=3D[]}}]]'}), log id: 14f61ee0<br />>>>>
2018-02-12 16:4=
6:30,262+01 INFO [org.ovirt.engine.core.vdsbro<br />>>>>
ke=
r.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32=
)<br />>>>> [2f712024-5982-46a8-82c8-fd8293da5725]
START,<b=
r />>>>> MigrateBrokerVDSCommand(HostName =3D
victor.local.=
systea.fr,<br />>>>>
MigrateVDSCommandParameters:{runAsync=3D=
'true',<br />>>>>
hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd=
87bb1',<br />>>>>
vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a0=
99d', srcHost=3D'192.168.0.6',<br />>>>>
dstVdsId=3D'd569c2=
dd-8f30-4878-8aea-858db285cf69', dstHost=3D'<br
/>>>>> 192.=
168.0.5:54321', migrationMethod=3D'ONLINE',
tunnelMigration=3D'false',<=
br />>>>> migrationDowntime=3D'0',
autoConverge=3D'true', m=
igrateCompressed=3D'false',<br />>>>>
consoleAddress=3D'nul=
l', maxBandwidth=3D'500', enableGuestEvents=3D'true',<br
/>>>>=
> maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2',<br
/>>=
;>>> convergenceSchedule=3D'[init=3D[{name=3DsetDowntime, para=
ms=3D[100]}],<br />>>>> stalling=3D[{limit=3D1,
action=3D{n=
ame=3DsetDowntime, params=3D[150]}}, {limit=3D2,<br
/>>>>> =
action=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3,<br />>&g=
t;>> action=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4,<=
br />>>>> action=3D{name=3DsetDowntime, params=3D[400]}},
{=
limit=3D6,<br />>>>> action=3D{name=3DsetDowntime,
params=3D=
[500]}}, {limit=3D-1, action=3D{name=3Dabort,<br />>>>>
par=
ams=3D[]}}]]'}), log id: 775cd381<br />>>>>
2018-02-12 16:4=
6:30,277+01 INFO [org.ovirt.engine.core.vdsbro<br />>>>>
ke=
r.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32=
)<br />>>>> [2f712024-5982-46a8-82c8-fd8293da5725]
FINISH, =
MigrateBrokerVDSCommand,<br />>>>> log id:
775cd381<br />&g=
t;>>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.v=
dsbroker.MigrateVDSCommand]<br />>>>>
(org.ovirt.thread.poo=
l-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]<br
/>>>>=
> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0=
<br />>>>> 2018-02-12 16:46:30,301+01 INFO
[org.ovirt.engin=
e.core.dal.db<br />>>>>
broker.auditloghandling.AuditLogDir=
ector] (org.ovirt.thread.pool-6-thread-32)<br />>>>>
[2f712=
024-5982-46a8-82c8-fd8293da5725] EVENT=5FID:<br />>>>>
VM=5F=
MIGRATION=5FSTART(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da=
5725,<br />>>>> Job ID:
4bd19aa9-cc99-4d02-884e-5a1e857a773=
8, Call Stack: null, Custom<br />>>>> ID: null, Custom
Even=
t ID: -1, Message: Migration started (VM:<br />>>>>
Oracle=5F=
SECONDARY, Source: victor.local.systea.fr, Destination:<br
/>>>&g=
t;> ginger.local.systea.fr, User: admin(a)internal-authz).<br
/>>&g=
t;>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbr=
o<br />>>>> ker.vdsbroker.FullListVDSCommand]
(DefaultQuart=
zScheduler9) [54a65b66]<br />>>>> START,
FullListVDSCommand=
(HostName =3D victor.local.systea.fr,<br />>>>>
FullListVDS=
CommandParameters:{runAsync=3D'true',<br />>>>>
hostId=3D'c=
e3938b1-b23f-4d22-840a-f17d7cd87bb1',<br />>>>>
vmIds=3D'[3=
f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435<br
/>>>=
>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbro<=
br />>>>> ker.vdsbroker.FullListVDSCommand]
(DefaultQuartzS=
cheduler9) [54a65b66]<br />>>>> FINISH,
FullListVDSCommand,=
return: [{acpiEnable=3Dtrue,<br />>>>>
emulatedMachine=3Dp=
c-i440fx-rhel7.3.0, tabletEnable=3Dtrue, pid=3D1493,<br
/>>>>&=
gt; guestDiskMapping=3D{0QEMU=5FQEMU=5FHARDDISK=5Fd890fa68-fba4-4f49-9=3D=
{name=3D/dev/sda},<br />>>>>
QEMU=5FDVD-ROM=5FQM00003=3D{na=
me=3D/dev/sr0}}, transparentHugePages=3Dtrue,<br />>>>>
tim=
eOffset=3D0, cpuType=3DNehalem, smp=3D2, pauseCode=3DNOERR,<br />>&g=
t;>> guestNumaNodes=3D[Ljava.lang.Object;@1d9042cd, smartcardEnab=
le=3Dfalse,<br />>>>>
custom=3D{device=5Ffbddd528-7d93-49c6=
-a286-180e021cb274device=5F87<br />>>>>
9c93ab-4df1-435c-af=
02-565039fcc254=3DVmDevice:{id=3D'VmDeviceId:<br
/>>>>> {de=
viceId=3D'879c93ab-4df1-435c-af02-565039fcc254',<br
/>>>>> =
vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}',
device=3D'unix',<br />=
>>>> type=3D'CHANNEL', bootOrder=3D'0',
specParams=3D'[]', =
address=3D'{bus=3D0,<br />>>>> controller=3D0,
type=3Dvirti=
o-serial, port=3D1}', managed=3D'false',<br
/>>>>> plugged=3D=
'true', readOnly=3D'false', deviceAlias=3D'channel0',<br
/>>>>=
> customProperties=3D'[]', snapshotId=3D'null',
logicalName=3D'null'=
,<br />>>>> hostDevice=3D'null'},
device=5Ffbddd528-7d93-49=
c6-a286<br />>>>>
-180e021cb274device=5F879c93ab-4df1-435c-=
af02-565039fcc254devi<br />>>>>
ce=5F8945f61a-abbe-4156-848=
5-a4aa6f1908dbdevice=5F017b5e59-01c4-<br />>>>>
4aac-bf0c-b=
5d9557284d6=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'0<br
/>>>&g=
t;> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId=3D'3f57e669-5e4c-4d10=
-85cc-d573004a099d'}',<br />>>>>
device=3D'tablet', type=3D=
'UNKNOWN', bootOrder=3D'0', specParams=3D'[]',<br
/>>>>> ad=
dress=3D'{bus=3D0, type=3Dusb, port=3D1}', managed=3D'false', plugged=3D=
'true',<br />>>>> readOnly=3D'false',
deviceAlias=3D'input0=
', customProperties=3D'[]',<br />>>>>
snapshotId=3D'null', =
logicalName=3D'null', hostDevice=3D'null'},<br
/>>>>> devic=
e=5Ffbddd528-7d93-49c6-a286-180e021cb274=3DVmDevice:{id=3D'Vm<br />>=
>>> DeviceId:{deviceId=3D'fbddd528-7d93-49c6-a286-180e021cb274=
',<br />>>>>
vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}=
', device=3D'ide',<br />>>>>
type=3D'CONTROLLER', bootOrder=
=3D'0', specParams=3D'[]', address=3D'{slot=3D0x01,<br
/>>>>&g=
t; bus=3D0x00, domain=3D0x0000, type=3Dpci, function=3D0x1}', managed=3D=
'false',<br />>>>> plugged=3D'true',
readOnly=3D'false', de=
viceAlias=3D'ide', customProperties=3D'[]',<br
/>>>>> snaps=
hotId=3D'null', logicalName=3D'null', hostDevice=3D'null'},<br
/>>&g=
t;>> device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c93=
ab-4<br />>>>>
df1-435c-af02-565039fcc254device=5F8945f61a-=
abbe-4156-8485-a4a<br />>>>>
a6f1908db=3DVmDevice:{id=3D'Vm=
DeviceId:{deviceId=3D'8945f61a-abbe-4156-8485-a4aa6f1908db',<br
/>>&=
gt;>> vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}',
device=3D'=
unix',<br />>>>> type=3D'CHANNEL',
bootOrder=3D'0', specPar=
ams=3D'[]', address=3D'{bus=3D0,<br />>>>>
controller=3D0, =
type=3Dvirtio-serial, port=3D2}', managed=3D'false',<br
/>>>>&=
gt; plugged=3D'true', readOnly=3D'false',
deviceAlias=3D'channel1',<br =
/>>>>> customProperties=3D'[]',
snapshotId=3D'null', logica=
lName=3D'null',<br />>>>>
hostDevice=3D'null'}}, vmType=3Dk=
vm, memSize=3D8192, smpCoresPerSocket=3D1,<br />>>>>
vmName=
=3DOracle=5FSECONDARY, nice=3D0, status=3DMigration Source, maxMemSize=3D=
32768,<br />>>>> bootMenuEnable=3Dfalse,
vmId=3D3f57e669-5e=
4c-4d10-85cc-d573004a099d,<br />>>>> numOfIoThreads=3D2,
sm=
pThreadsPerCore=3D1, memGuaranteedSize=3D8192,<br />>>>>
kv=
mEnable=3Dtrue, pitReinjection=3Dfalse, displayNetwork=3Dovirtmgmt,<br =
/>>>>> devices=3D[Ljava.lang.Object;@28ae66d7,
display=3Dvn=
c, maxVCpus=3D16,<br />>>>> clientIp=3D,
statusTime=3D42994=
84520, maxMemSlots=3D16}], log id: 54b4b435<br />>>>>
2018-=
02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro<br
/>>>&=
gt;> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1)<=
br />>>>> [27fac647] Fetched 3 VMs from VDS
'd569c2dd-8f30-=
4878-8aea-858db285cf69'<br />>>>> 2018-02-12
16:46:31,151+0=
1 INFO [org.ovirt.engine.core.vdsbro<br />>>>>
ker.monitori=
ng.VmDevicesMonitoring] (DefaultQuartzScheduler9)<br
/>>>>>=
[54a65b66] Received a vnc Device without an address when processing VM=
<br />>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices,
sk=
ipping device:<br />>>>> {device=3Dvnc,
specParams=3D{displ=
ayNetwork=3Dovirtmgmt, keyMap=3Dfr,<br />>>>>
displayIp=3D1=
92.168.0.6}, type=3Dgraphics, deviceId=3D813957b1-446a-4e88-9e40-9fe76d=
2c442d,<br />>>>> port=3D5901}<br
/>>>>> 2018-0=
2-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro<br
/>>>&g=
t;> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9)<br=
/>>>>> [54a65b66] Received a lease Device without an
addre=
ss when processing VM<br />>>>>
3f57e669-5e4c-4d10-85cc-d57=
3004a099d devices, skipping device:<br />>>>>
{lease=5Fid=3D=
3f57e669-5e4c-4d10-85cc-d573004a099d,<br />>>>>
sd=5Fid=3D1=
e51cecc-eb2e-47d0-b185-920fdc7afa16,<br />>>>>
deviceId=3D{=
uuid=3Da09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=3D6291456,<br />&g=
t;>>> device=3Dlease, path=3D/rhev/data-center/mnt/glusterSD/1=
92.168.0.6:<br />>>>>
=5FDATA01/1e51cecc-eb2e-47d0-b185-920=
fdc7afa16/dom=5Fmd/xleases, type=3Dlease}<br />>>>>
2018-02=
-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.Vm=
Analyzer]<br />>>>> (DefaultQuartzScheduler1) [27fac647]
VM=
'3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle=5FSECONDARY)<br
/>>&g=
t;>> was unexpectedly detected as 'MigratingTo' on VDS<br
/>>&=
gt;>> 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.=
fr)<br />>>>> (expected on
'ce3938b1-b23f-4d22-840a-f17d7cd=
87bb1')<br />>>>> 2018-02-12 16:46:31,152+01 INFO
[org.ovir=
t.engine.core.vdsbroker.monitoring.VmAnalyzer]<br />>>>>
(D=
efaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a=
099d'<br />>>>> is migrating to VDS
'd569c2dd-8f30-4878-8ae=
a-858db285cf69'(<br />>>>> ginger.local.systea.fr)
ignoring=
it in the refresh until migration is<br />>>>>
done<br />&=
gt;>>> ....<br />>>>> 2018-02-12
16:46:41,631+01 I=
NFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]<br />>&g=
t;>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d5=
73004a099d'<br />>>>> was reported as Down on VDS
'ce3938b1=
-b23f-4d22-840a-f17d7cd87bb1'(<br />>>>>
victor.local.syste=
a.fr)<br />>>>> 2018-02-12 16:46:41,632+01 INFO
[org.ovirt.=
engine.core.vdsbro<br />>>>>
ker.vdsbroker.DestroyVDSComman=
d] (ForkJoinPool-1-worker-11) [] START,<br />>>>>
DestroyVD=
SCommand(HostName =3D victor.local.systea.fr,<br />>>>>
Des=
troyVmVDSCommandParameters:{runAsync=3D'true',<br
/>>>>> ho=
stId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1',<br
/>>>>> vm=
Id=3D'3f57e669-5e4c-4d10-85cc-d573004a099d', force=3D'false',<br
/>>=
>>> secondsToWait=3D'0', gracefully=3D'false',
reason=3D'', ig=
noreNoVm=3D'true'}), log<br />>>>> id:
560eca57<br />>&g=
t;>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbr=
o<br />>>>> ker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-=
1-worker-11) [] FINISH,<br />>>>> DestroyVDSCommand, log
id=
: 560eca57<br />>>>> 2018-02-12 16:46:41,650+01 INFO
[org.o=
virt.engine.core.vdsbroker.monitoring.VmAnalyzer]<br
/>>>>>=
(ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d=
'(Oracle=5FSECONDARY)<br />>>>> moved from
'MigratingFrom' =
--> 'Down'<br />>>>> 2018-02-12
16:46:41,651+01 INFO [or=
g.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]<br
/>>>>&=
gt; (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-8=
5cc-d573004a099d'(Oracle=5FSECONDARY)<br />>>>> to
Host 'd5=
69c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status<br
/>>>=
>> 'MigratingTo'<br />>>>>
2018-02-12 16:46:42,163+01=
INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]<br />>=
>>> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d=
573004a099d'(Oracle=5FSECONDARY)<br />>>>> moved
from 'Migr=
atingTo' --> 'Up'<br />>>>>
2018-02-12 16:46:42,169+01 I=
NFO [org.ovirt.engine.core.vdsbro<br />>>>>
ker.vdsbroker.M=
igrateStatusVDSCommand] (ForkJoinPool-1-worker-4) []<br
/>>>>&=
gt; START, MigrateStatusVDSCommand(HostName =3D ginger.local.systea.fr,=
<br />>>>>
MigrateStatusVDSCommandParameters:{runAsync=3D't=
rue',<br />>>>>
hostId=3D'd569c2dd-8f30-4878-8aea-858db285c=
f69',<br />>>>>
vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099=
d'}), log id: 7a25c281<br />>>>> 2018-02-12
16:46:42,174+01=
INFO [org.ovirt.engine.core.vdsbro<br />>>>>
ker.vdsbroker=
.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) []<br
/>>>>=
;> FINISH, MigrateStatusVDSCommand, log id: 7a25c281<br
/>>>&g=
t;> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db<br=
/>>>>> broker.auditloghandling.AuditLogDirector]
(ForkJoin=
Pool-1-worker-4) []<br />>>>> EVENT=5FID:
VM=5FMIGRATION=5F=
DONE(63), Correlation ID:<br />>>>>
2f712024-5982-46a8-82c8=
-fd8293da5725, Job ID:<br />>>>>
4bd19aa9-cc99-4d02-884e-5a=
1e857a7738, Call Stack: null, Custom ID:<br />>>>> null,
Cu=
stom Event ID: -1, Message: Migration completed (VM:<br
/>>>>&=
gt; Oracle=5FSECONDARY, Source: victor.local.systea.fr, Destination:<br=
/>>>>> ginger.local.systea.fr, Duration: 11 seconds,
Total=
: 11 seconds, Actual<br />>>>> downtime: (N/A))<br
/>>&g=
t;>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.M=
igrateVmToServerCommand]<br />>>>>
(ForkJoinPool-1-worker-4=
) [] Lock freed to object<br />>>>>
'EngineLock:{exclusiveL=
ocks=3D'[3f57e669-5e4c-4d10-85cc-d573004a099d=3DVM]',<br
/>>>>=
> sharedLocks=3D''}'<br />>>>>
2018-02-12 16:46:42,203+0=
1 INFO [org.ovirt.engine.core.vdsbro<br />>>>>
ker.vdsbroke=
r.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START,<br />>>=
>> FullListVDSCommand(HostName =3D ginger.local.systea.fr,<br
/>&=
gt;>>> FullListVDSCommandParameters:{runAsync=3D'true',<br
/>&=
gt;>>> hostId=3D'd569c2dd-8f30-4878-8aea-858db285cf69',<br
/>&=
gt;>>> vmIds=3D'[3f57e669-5e4c-4d10-85cc-d573004a099d]'}),
log=
id: 7cc65298<br />>>>> 2018-02-12 16:46:42,254+01 INFO
[or=
g.ovirt.engine.core.vdsbro<br />>>>>
ker.vdsbroker.FullList=
VDSCommand] (ForkJoinPool-1-worker-4) [] FINISH,<br
/>>>>> =
FullListVDSCommand, return: [{acpiEnable=3Dtrue,<br
/>>>>> =
emulatedMachine=3Dpc-i440fx-rhel7.3.0, afterMigrationStatus=3D,<br />&g=
t;>>> tabletEnable=3Dtrue, pid=3D18748, guestDiskMapping=3D{},=
<br />>>>> transparentHugePages=3Dtrue, timeOffset=3D0,
cpu=
Type=3DNehalem, smp=3D2,<br />>>>>
guestNumaNodes=3D[Ljava.=
lang.Object;@760085fd,<br />>>>>
custom=3D{device=5Ffbddd52=
8-7d93-49c6-a286-180e021cb274device=5F87<br />>>>>
9c93ab-4=
df1-435c-af02-565039fcc254=3DVmDevice:{id=3D'VmDeviceId:<br
/>>>&=
gt;> {deviceId=3D'879c93ab-4df1-435c-af02-565039fcc254',<br
/>>&g=
t;>> vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}',
device=3D'u=
nix',<br />>>>> type=3D'CHANNEL',
bootOrder=3D'0', specPara=
ms=3D'[]', address=3D'{bus=3D0,<br />>>>>
controller=3D0, t=
ype=3Dvirtio-serial, port=3D1}', managed=3D'false',<br
/>>>>&g=
t; plugged=3D'true', readOnly=3D'false',
deviceAlias=3D'channel0',<br /=
>>>> customProperties=3D'[]',
snapshotId=3D'null', logical=
Name=3D'null',<br
/>>>>> hostDevice=3D'null'}, device=5Ffbd=
dd528-7d93-49c6-a286<br />>>>>
-180e021cb274device=5F879c93=
ab-4df1-435c-af02-565039fcc254devi<br />>>>>
ce=5F8945f61a-=
abbe-4156-8485-a4aa6f1908dbdevice=5F017b5e59-01c4-<br
/>>>>>=
; 4aac-bf0c-b5d9557284d6=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'0<br=
/>>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6',
vmId=3D'3f57e=
669-5e4c-4d10-85cc-d573004a099d'}',<br />>>>>
device=3D'tab=
let', type=3D'UNKNOWN', bootOrder=3D'0',
specParams=3D'[]',<br />>&g=
t;>> address=3D'{bus=3D0, type=3Dusb, port=3D1}',
managed=3D'fals=
e', plugged=3D'true',<br />>>>>
readOnly=3D'false', deviceA=
lias=3D'input0', customProperties=3D'[]',<br
/>>>>> snapsho=
tId=3D'null', logicalName=3D'null', hostDevice=3D'null'},<br
/>>>=
>> device=5Ffbddd528-7d93-49c6-a286-180e021cb274=3DVmDevice:{id=3D=
'Vm<br />>>>>
DeviceId:{deviceId=3D'fbddd528-7d93-49c6-a286=
-180e021cb274',<br />>>>>
vmId=3D'3f57e669-5e4c-4d10-85cc-d=
573004a099d'}', device=3D'ide',<br
/>>>>> type=3D'CONTROLLE=
R', bootOrder=3D'0', specParams=3D'[]',
address=3D'{slot=3D0x01,<br />&=
gt;>>> bus=3D0x00, domain=3D0x0000, type=3Dpci, function=3D0x1=
}', managed=3D'false',<br />>>>>
plugged=3D'true', readOnly=
=3D'false', deviceAlias=3D'ide', customProperties=3D'[]',<br
/>>>=
>> snapshotId=3D'null', logicalName=3D'null',
hostDevice=3D'null'=
},<br />>>>>
device=5Ffbddd528-7d93-49c6-a286-180e021cb274d=
evice=5F879c93ab-4<br />>>>>
df1-435c-af02-565039fcc254devi=
ce=5F8945f61a-abbe-4156-8485-a4a<br />>>>>
a6f1908db=3DVmDe=
vice:{id=3D'VmDeviceId:{deviceId=3D'8945f61a-abbe-4156-8485-a4aa6f1908d=
b',<br />>>>>
vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'=
}', device=3D'unix',<br />>>>>
type=3D'CHANNEL', bootOrder=3D=
'0', specParams=3D'[]', address=3D'{bus=3D0,<br
/>>>>> cont=
roller=3D0, type=3Dvirtio-serial, port=3D2}', managed=3D'false',<br
/>&=
gt;>>> plugged=3D'true', readOnly=3D'false',
deviceAlias=3D'ch=
annel1',<br />>>>> customProperties=3D'[]',
snapshotId=3D'n=
ull', logicalName=3D'null',<br />>>>>
hostDevice=3D'null'}}=
, vmType=3Dkvm, memSize=3D8192, smpCoresPerSocket=3D1,<br
/>>>>=
;> vmName=3DOracle=5FSECONDARY, nice=3D0, status=3DUp, maxMemSize=3D=
32768,<br />>>>> bootMenuEnable=3Dfalse,
vmId=3D3f57e669-5e=
4c-4d10-85cc-d573004a099d,<br />>>>> numOfIoThreads=3D2,
sm=
pThreadsPerCore=3D1, smartcardEnable=3Dfalse,<br />>>>>
max=
MemSlots=3D16, kvmEnable=3Dtrue, pitReinjection=3Dfalse,<br
/>>>&=
gt;> displayNetwork=3Dovirtmgmt, devices=3D[Ljava.lang.Object;@2e4d3=
dd3,<br />>>>> memGuaranteedSize=3D8192, maxVCpus=3D16,
cli=
entIp=3D, statusTime=3D4304259600<br />>>>>
<(430)%20425=
-9600>, display=3Dvnc}], log id: 7cc65298<br
/>>>>> 2018=
-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro<br />>>=
>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) =
[]<br />>>>> Received a vnc Device without an address
when =
processing VM<br />>>>>
3f57e669-5e4c-4d10-85cc-d573004a099=
d devices, skipping device:<br />>>>> {device=3Dvnc,
specPa=
rams=3D{displayNetwork=3Dovirtmgmt, keyMap=3Dfr,<br
/>>>>> =
displayIp=3D192.168.0.5}, type=3Dgraphics, deviceId=3D813957b1-446a-4e8=
8-9e40-9fe76d2c442d,<br />>>>> port=3D5901}<br
/>>>&g=
t;> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro<br=
/>>>>> ker.monitoring.VmDevicesMonitoring]
(ForkJoinPool-1=
-worker-4) []<br />>>>> Received a lease Device without
an =
address when processing VM<br />>>>>
3f57e669-5e4c-4d10-85c=
c-d573004a099d devices, skipping device:<br />>>>>
{lease=5F=
id=3D3f57e669-5e4c-4d10-85cc-d573004a099d,<br />>>>>
sd=5Fi=
d=3D1e51cecc-eb2e-47d0-b185-920fdc7afa16,<br />>>>>
deviceI=
d=3D{uuid=3Da09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=3D6291456,<br=
/>>>>> device=3Dlease,
path=3D/rhev/data-center/mnt/gluste=
rSD/192.168.0.6:<br />>>>>
=5FDATA01/1e51cecc-eb2e-47d0-b18=
5-920fdc7afa16/dom=5Fmd/xleases, type=3Dlease}<br />>>>>
2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c
2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}

For the VM with two vdisks we see:

2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}'
2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected: ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0
2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c
2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c
2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz).
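As an aside for readers decoding the convergenceSchedule in the START parameters above: migration begins with a 100 ms downtime allowance, and each detected stalling iteration escalates it (150, 200, 300, 400, 500 ms) until the limit=-1 entry aborts. A rough sketch of that escalation policy, assuming each entry's limit is compared against the stall counter (an illustration only, not vdsm's actual code):

```python
# Illustrative model of the logged convergenceSchedule -- NOT vdsm's
# implementation; the comparison semantics are an assumption.
SCHEDULE = {
    "init": [{"name": "setDowntime", "params": [100]}],
    "stalling": [
        {"limit": 1, "action": {"name": "setDowntime", "params": [150]}},
        {"limit": 2, "action": {"name": "setDowntime", "params": [200]}},
        {"limit": 3, "action": {"name": "setDowntime", "params": [300]}},
        {"limit": 4, "action": {"name": "setDowntime", "params": [400]}},
        {"limit": 6, "action": {"name": "setDowntime", "params": [500]}},
        {"limit": -1, "action": {"name": "abort", "params": []}},
    ],
}

def action_for_stall_count(schedule, stalls):
    """Return the action applied after `stalls` stalling iterations.

    Entries are tried in order; limit=-1 is the catch-all (abort).
    """
    for entry in schedule["stalling"]:
        if entry["limit"] == -1 or stalls <= entry["limit"]:
            return entry["action"]
    return None
```

So a migration that keeps stalling walks through ever-larger allowed downtimes and is finally aborted rather than left running forever, which is why a stuck-yet-never-finishing migration like the one below is surprising.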
...
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
...
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
...

and so on, with these last lines repeating indefinitely for hours until we powered off the VM...

Is this something known? Any idea about that?
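(For anyone following this thread: the libvirt debug logs suggested elsewhere in the discussion can usually be enabled on the source host through /etc/libvirt/libvirtd.conf. The filter and output values below are assumptions drawn from general libvirt debugging guidance, not from this thread; adjust them for your libvirt version.)

```shell
# Sketch, assuming a standard libvirt install; values are from common
# libvirt debugging guidance, not confirmed by this thread.
cat >> /etc/libvirt/libvirtd.conf <<'EOF'
log_filters="1:qemu 1:libvirt 4:object 4:json 4:event 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
EOF
# Restart the daemon, retry the migration, then inspect the log file.
systemctl restart libvirtd
```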

Thanks

oVirt 4.1.6, last updated Feb 13. Gluster 3.12.1.

--
Regards,

*Frank Soyer*
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users