
Hi all,

Yesterday I discovered a problem when migrating VMs with more than one vdisk.

On our test servers (oVirt 4.1, shared storage with Gluster), I created the 2 VMs needed for a test from a template with a 20G vdisk. On these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I ended up losing time anyway...). The VMs with the 2 vdisks work well.

Later I saw some updates waiting on the host and tried to put it into maintenance... but it got stuck on those two VMs. They were marked "migrating" but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time.

I saw that a kvm process for the (big) VMs was launched on the source AND the destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for these VMs: it failed. The only way to stop them was to power off the VMs: the kvm process then died on both hosts and the GUI reported a failed migration.

As a test, I deleted the second vdisk on one of these VMs: it then migrated without error, and with no access problem.
I tried extending the first vdisk of the second VM, then deleting the second vdisk: it now migrates without problem!

So after another test with a VM with 2 vdisks, I can say that this is what blocked the migration process :(

In engine.log, for a VM with 1 vdisk migrating correctly, we see:

2018-02-12 16:46:29,705+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected :  ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:46:30,261+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381
2018-02-12 16:46:30,277+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381
2018-02-12 16:46:30,285+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz).
2018-02-12 16:46:31,106+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
2018-02-12 16:46:31,147+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192,
kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
2018-02-12 16:46:31,150+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
2018-02-12 16:46:31,151+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:31,151+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:31,152+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
2018-02-12 16:46:31,152+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done
....
2018-02-12 16:46:41,631+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
2018-02-12 16:46:41,632+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57
2018-02-12 16:46:41,650+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57
2018-02-12 16:46:41,650+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down'
2018-02-12 16:46:41,651+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'.
Setting VM to status 'MigratingTo'
2018-02-12 16:46:42,163+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up'
2018-02-12 16:46:42,169+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
2018-02-12 16:46:42,174+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281
2018-02-12 16:46:42,194+01 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))
2018-02-12 16:46:42,201+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:42,203+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
2018-02-12 16:46:42,254+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298
2018-02-12 16:46:42,257+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:42,257+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:46,260+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null',
logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c
2018-02-12 16:46:46,267+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:46,268+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}

For the VM with 2 vdisks, we see:

2018-02-12 16:49:06,112+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}'
2018-02-12 16:49:06,407+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false.
Entities affected :  ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:49:06,712+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0
2018-02-12 16:49:06,713+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null',
maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c
2018-02-12 16:49:06,724+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c
2018-02-12 16:49:06,732+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
2018-02-12 16:49:06,753+01 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz).
...
2018-02-12 16:49:16,453+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
2018-02-12 16:49:16,455+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:16,455+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
...
2018-02-12 16:49:31,484+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:31,484+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done

and so on, with these last two lines repeating indefinitely for hours, until we powered off the VM...

Is this a known issue? Any idea what is happening?

Thanks

oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
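In case it helps anyone check their own logs: the difference between the two VMs above is that Oracle_SECONDARY's VM_MIGRATION_START event is followed by a VM_MIGRATION_DONE with the same Correlation ID, while Oracle_PRIMARY's never is. A small sketch of that check (not part of oVirt; the regexes are my assumptions based on the engine.log lines quoted above):

```python
import re

# Find migrations that logged VM_MIGRATION_START but never VM_MIGRATION_DONE.
# Patterns are assumptions based on the engine.log excerpts in this mail.
START_RE = re.compile(r"VM_MIGRATION_START\(62\), Correlation ID: (\S+?),.*\(VM: (\S+?),")
DONE_RE = re.compile(r"VM_MIGRATION_DONE\(63\), Correlation ID: (\S+?),")

def unfinished_migrations(lines):
    started = {}  # correlation id -> VM name
    for line in lines:
        m = START_RE.search(line)
        if m:
            started[m.group(1)] = m.group(2)
            continue
        m = DONE_RE.search(line)
        if m:
            started.pop(m.group(1), None)  # migration completed, drop it
    return started  # whatever remains never reached VM_MIGRATION_DONE
```

Run over the excerpts above, Oracle_SECONDARY's correlation ID pairs up, while Oracle_PRIMARY's (92b5af33-cb87-4142-b8fe-8b838dd7458e) is left open.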
--
Cordialement,
Frank Soyer
&nb= sp;<br /><br />So after another test with a VM with 2 vdisks, I can say= that this blocked the migration process :(<br /><br />In engine.log, f= or a VMs with 1 vdisk migrating well, we see :</p><blockquote>2018-02-1= 2 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServ= erCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Loc= k Acquired to object 'EngineLock:{exclusiveLocks=3D'[3f57e669-5e4c-4d10= -85cc-d573004a099d=3DVM]', sharedLocks=3D''}'<br />2018-02-12 16:46:29,= 955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] = (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da57= 25] Running command: MigrateVmToServerCommand internal: false. Entities= affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMActi= on group MIGRATE=5FVM with role type USER<br />2018-02-12 16:46:30,261+= 01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.= ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] S= TART, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync=3D'true'= , hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId=3D'3f57e669-5e4= c-4d10-85cc-d573004a099d', srcHost=3D'192.168.0.6', dstVdsId=3D'd569c2d= d-8f30-4878-8aea-858db285cf69', dstHost=3D'192.168.0.5:54321', migratio= nMethod=3D'ONLINE', tunnelMigration=3D'false', migrationDowntime=3D'0',= autoConverge=3D'true', migrateCompressed=3D'false', consoleAddress=3D'= null', maxBandwidth=3D'500', enableGuestEvents=3D'true', maxIncomingMig= rations=3D'2', maxOutgoingMigrations=3D'2', convergenceSchedule=3D'[ini= t=3D[{name=3DsetDowntime, params=3D[100]}], stalling=3D[{limit=3D1, act= ion=3D{name=3DsetDowntime, params=3D[150]}}, {limit=3D2, action=3D{name= =3DsetDowntime, params=3D[200]}}, {limit=3D3, action=3D{name=3DsetDownt= ime, params=3D[300]}}, {limit=3D4, action=3D{name=3DsetDowntime, params= =3D[400]}}, {limit=3D6, action=3D{name=3DsetDowntime, params=3D[500]}},= {limit=3D-1, action=3D{name=3Dabort, 
params=3D[]}}]]'}), log id: 14f61= ee0<br />2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.v= dsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-th= read-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDS= Command(HostName =3D victor.local.systea.fr, MigrateVDSCommandParameter= s:{runAsync=3D'true', hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1', = vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost=3D'192.168.0.6',= dstVdsId=3D'd569c2dd-8f30-4878-8aea-858db285cf69', dstHost=3D'192.168.= 0.5:54321', migrationMethod=3D'ONLINE', tunnelMigration=3D'false', migr= ationDowntime=3D'0', autoConverge=3D'true', migrateCompressed=3D'false'= , consoleAddress=3D'null', maxBandwidth=3D'500', enableGuestEvents=3D't= rue', maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2', converg= enceSchedule=3D'[init=3D[{name=3DsetDowntime, params=3D[100]}], stallin= g=3D[{limit=3D1, action=3D{name=3DsetDowntime, params=3D[150]}}, {limit= =3D2, action=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3, actio= n=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4, action=3D{name=3D= setDowntime, params=3D[400]}}, {limit=3D6, action=3D{name=3DsetDowntime= , params=3D[500]}}, {limit=3D-1, action=3D{name=3Dabort, params=3D[]}}]= ]'}), log id: 775cd381<br />2018-02-12 16:46:30,277+01 INFO [org.= ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovi= rt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINI= SH, MigrateBrokerVDSCommand, log id: 775cd381<br />2018-02-12 16:46:30,= 285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (= org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da572= 5] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0<b= r />2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.db= broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thre= ad-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT=5FID: VM=5FMIGRATIO= N=5FSTART(62), 
Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Jo= b ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID= : null, Custom Event ID: -1, Message: Migration started (VM: Oracle=5FS= ECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.sys= tea.fr, User: admin@internal-authz).<br />2018-02-12 16:46:31,106+01 IN= FO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]= (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostNam= e =3D victor.local.systea.fr, FullListVDSCommandParameters:{runAsync=3D= 'true', hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds=3D'[3f57= e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435<br />2018-02-12 = 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.F= ullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullLis= tVDSCommand, return: [{acpiEnable=3Dtrue, emulatedMachine=3Dpc-i440fx-r= hel7.3.0, tabletEnable=3Dtrue, pid=3D1493, guestDiskMapping=3D{0QEMU=5F= QEMU=5FHARDDISK=5Fd890fa68-fba4-4f49-9=3D{name=3D/dev/sda}, QEMU=5FDVD-= ROM=5FQM00003=3D{name=3D/dev/sr0}}, transparentHugePages=3Dtrue, timeOf= fset=3D0, cpuType=3DNehalem, smp=3D2, pauseCode=3DNOERR, guestNumaNodes= =3D[Ljava.lang.Object;@1d9042cd, smartcardEnable=3Dfalse, custom=3D{dev= ice=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c93ab-4df1-435c-a= f02-565039fcc254=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'879c93ab-4df= 1-435c-af02-565039fcc254', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d= '}', device=3D'unix', type=3D'CHANNEL', bootOrder=3D'0', specParams=3D'= []', address=3D'{bus=3D0, controller=3D0, type=3Dvirtio-serial, port=3D= 1}', managed=3D'false', plugged=3D'true', readOnly=3D'false', deviceAli= as=3D'channel0', customProperties=3D'[]', snapshotId=3D'null', logicalN= ame=3D'null', hostDevice=3D'null'}, device=5Ffbddd528-7d93-49c6-a286-18= 0e021cb274device=5F879c93ab-4df1-435c-af02-565039fcc254device=5F8945f61= 
a-abbe-4156-8485-a4aa6f1908dbdevice=5F017b5e59-01c4-4aac-bf0c-b5d955728= 4d6=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'017b5e59-01c4-4aac-bf0c-b= 5d9557284d6', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', device=3D= 'tablet', type=3D'UNKNOWN', bootOrder=3D'0', specParams=3D'[]', address= =3D'{bus=3D0, type=3Dusb, port=3D1}', managed=3D'false', plugged=3D'tru= e', readOnly=3D'false', deviceAlias=3D'input0', customProperties=3D'[]'= , snapshotId=3D'null', logicalName=3D'null', hostDevice=3D'null'}, devi= ce=5Ffbddd528-7d93-49c6-a286-180e021cb274=3DVmDevice:{id=3D'VmDeviceId:= {deviceId=3D'fbddd528-7d93-49c6-a286-180e021cb274', vmId=3D'3f57e669-5e= 4c-4d10-85cc-d573004a099d'}', device=3D'ide', type=3D'CONTROLLER', boot= Order=3D'0', specParams=3D'[]', address=3D'{slot=3D0x01, bus=3D0x00, do= main=3D0x0000, type=3Dpci, function=3D0x1}', managed=3D'false', plugged= =3D'true', readOnly=3D'false', deviceAlias=3D'ide', customProperties=3D= '[]', snapshotId=3D'null', logicalName=3D'null', hostDevice=3D'null'}, = device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c93ab-4df1-435= c-af02-565039fcc254device=5F8945f61a-abbe-4156-8485-a4aa6f1908db=3DVmDe= vice:{id=3D'VmDeviceId:{deviceId=3D'8945f61a-abbe-4156-8485-a4aa6f1908d= b', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', device=3D'unix', t= ype=3D'CHANNEL', bootOrder=3D'0', specParams=3D'[]', address=3D'{bus=3D= 0, controller=3D0, type=3Dvirtio-serial, port=3D2}', managed=3D'false',= plugged=3D'true', readOnly=3D'false', deviceAlias=3D'channel1', custom= Properties=3D'[]', snapshotId=3D'null', logicalName=3D'null', hostDevic= e=3D'null'}}, vmType=3Dkvm, memSize=3D8192, smpCoresPerSocket=3D1, vmNa= me=3DOracle=5FSECONDARY, nice=3D0, status=3DMigration Source, maxMemSiz= e=3D32768, bootMenuEnable=3Dfalse, vmId=3D3f57e669-5e4c-4d10-85cc-d5730= 04a099d, numOfIoThreads=3D2, smpThreadsPerCore=3D1, memGuaranteedSize=3D= 8192, kvmEnable=3Dtrue, pitReinjection=3Dfalse, displayNetwork=3Dovirtm= gmt, 
devices=3D[Ljava.lang.Object;@28ae66d7, display=3Dvnc, maxVCpus=3D= 16, clientIp=3D, statusTime=3D4299484520, maxMemSlots=3D16}], log id: 5= 4b4b435<br />2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.co= re.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1)= [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf6= 9'<br />2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vd= sbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a6= 5b66] Received a vnc Device without an address when processing VM 3f57e= 669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=3Dvnc= , specParams=3D{displayNetwork=3Dovirtmgmt, keyMap=3Dfr, displayIp=3D19= 2.168.0.6}, type=3Dgraphics, deviceId=3D813957b1-446a-4e88-9e40-9fe76d2= c442d, port=3D5901}<br />2018-02-12 16:46:31,151+01 INFO [org.ovi= rt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartz= Scheduler9) [54a65b66] Received a lease Device without an address when = processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping de= vice: {lease=5Fid=3D3f57e669-5e4c-4d10-85cc-d573004a099d, sd=5Fid=3D1e5= 1cecc-eb2e-47d0-b185-920fdc7afa16, deviceId=3D{uuid=3Da09949aa-5642-4b6= d-94a4-8b0d04257be5}, offset=3D6291456, device=3Dlease, path=3D/rhev/da= ta-center/mnt/glusterSD/192.168.0.6:=5FDATA01/1e51cecc-eb2e-47d0-b185-9= 20fdc7afa16/dom=5Fmd/xleases, type=3Dlease}<br />2018-02-12 16:46:31,15= 2+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]= (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d5730= 04a099d'(Oracle=5FSECONDARY) was unexpectedly detected as 'MigratingTo'= on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) = (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')<br />2018-02-12 16= :46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.Vm= Analyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-8= 5cc-d573004a099d' is migrating to VDS 
'd569c2dd-8f30-4878-8aea-858db285= cf69'(ginger.local.systea.fr) ignoring it in the refresh until migratio= n is done<br />....<br />2018-02-12 16:46:41,631+01 INFO [org.ovi= rt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-= 11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down o= n VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)<br= />2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbrok= er.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, De= stroyVDSCommand(HostName =3D victor.local.systea.fr, DestroyVmVDSComman= dParameters:{runAsync=3D'true', hostId=3D'ce3938b1-b23f-4d22-840a-f17d7= cd87bb1', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d', force=3D'false= ', secondsToWait=3D'0', gracefully=3D'false', reason=3D'', ignoreNoVm=3D= 'true'}), log id: 560eca57<br />2018-02-12 16:46:41,650+01 INFO [= org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinP= ool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57<br />20= 18-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.mo= nitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d= 10-85cc-d573004a099d'(Oracle=5FSECONDARY) moved from 'MigratingFrom' --= > 'Down'<br />2018-02-12 16:46:41,651+01 INFO [org.ovirt.engin= e.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] H= anding over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle=5FSECONDAR= Y) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status= 'MigratingTo'<br />2018-02-12 16:46:42,163+01 INFO [org.ovirt.en= gine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) []= VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle=5FSECONDARY) moved fr= om 'MigratingTo' --> 'Up'<br />2018-02-12 16:46:42,169+01 INFO  = ;[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (F= orkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName =3D = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync=3D'= true', hostId=3D'd569c2dd-8f30-4878-8aea-858db285cf69', vmId=3D'3f57e66= 9-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281<br />2018-02-12 16:4= 6:42,174+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.Migra= teStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusV= DSCommand, log id: 7a25c281<br />2018-02-12 16:46:42,194+01 INFO = [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] = (ForkJoinPool-1-worker-4) [] EVENT=5FID: VM=5FMIGRATION=5FDONE(63), Cor= relation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc9= 9-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Eve= nt ID: -1, Message: Migration completed (VM: Oracle=5FSECONDARY, Source= : victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration= : 11 seconds, Total: 11 seconds, Actual downtime: (N/A))<br />2018-02-1= 2 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServ= erCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLoc= k:{exclusiveLocks=3D'[3f57e669-5e4c-4d10-85cc-d573004a099d=3DVM]', shar= edLocks=3D''}'<br />2018-02-12 16:46:42,203+01 INFO [org.ovirt.en= gine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worke= r-4) [] START, FullListVDSCommand(HostName =3D ginger.local.systea.fr, = FullListVDSCommandParameters:{runAsync=3D'true', hostId=3D'd569c2dd-8f3= 0-4878-8aea-858db285cf69', vmIds=3D'[3f57e669-5e4c-4d10-85cc-d573004a09= 9d]'}), log id: 7cc65298<br />2018-02-12 
16:46:42,254+01 INFO [or= g.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPo= ol-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=3Dtr= ue, emulatedMachine=3Dpc-i440fx-rhel7.3.0, afterMigrationStatus=3D, tab= letEnable=3Dtrue, pid=3D18748, guestDiskMapping=3D{}, transparentHugePa= ges=3Dtrue, timeOffset=3D0, cpuType=3DNehalem, smp=3D2, guestNumaNodes=3D= [Ljava.lang.Object;@760085fd, custom=3D{device=5Ffbddd528-7d93-49c6-a28= 6-180e021cb274device=5F879c93ab-4df1-435c-af02-565039fcc254=3DVmDevice:= {id=3D'VmDeviceId:{deviceId=3D'879c93ab-4df1-435c-af02-565039fcc254', v= mId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', device=3D'unix', type=3D= 'CHANNEL', bootOrder=3D'0', specParams=3D'[]', address=3D'{bus=3D0, con= troller=3D0, type=3Dvirtio-serial, port=3D1}', managed=3D'false', plugg= ed=3D'true', readOnly=3D'false', deviceAlias=3D'channel0', customProper= ties=3D'[]', snapshotId=3D'null', logicalName=3D'null', hostDevice=3D'n= ull'}, device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c93ab-4= df1-435c-af02-565039fcc254device=5F8945f61a-abbe-4156-8485-a4aa6f1908db= device=5F017b5e59-01c4-4aac-bf0c-b5d9557284d6=3DVmDevice:{id=3D'VmDevic= eId:{deviceId=3D'017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId=3D'3f57e66= 9-5e4c-4d10-85cc-d573004a099d'}', device=3D'tablet', type=3D'UNKNOWN', = bootOrder=3D'0', specParams=3D'[]', address=3D'{bus=3D0, type=3Dusb, po= rt=3D1}', managed=3D'false', plugged=3D'true', readOnly=3D'false', devi= ceAlias=3D'input0', customProperties=3D'[]', snapshotId=3D'null', logic= alName=3D'null', hostDevice=3D'null'}, device=5Ffbddd528-7d93-49c6-a286= -180e021cb274=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'fbddd528-7d93-4= 9c6-a286-180e021cb274', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}'= , device=3D'ide', type=3D'CONTROLLER', bootOrder=3D'0', specParams=3D'[= ]', address=3D'{slot=3D0x01, bus=3D0x00, domain=3D0x0000, type=3Dpci, f= unction=3D0x1}', managed=3D'false', plugged=3D'true', readOnly=3D'false= 
', deviceAlias=3D'ide', customProperties=3D'[]', snapshotId=3D'null', l= ogicalName=3D'null', hostDevice=3D'null'}, device=5Ffbddd528-7d93-49c6-= a286-180e021cb274device=5F879c93ab-4df1-435c-af02-565039fcc254device=5F= 8945f61a-abbe-4156-8485-a4aa6f1908db=3DVmDevice:{id=3D'VmDeviceId:{devi= ceId=3D'8945f61a-abbe-4156-8485-a4aa6f1908db', vmId=3D'3f57e669-5e4c-4d= 10-85cc-d573004a099d'}', device=3D'unix', type=3D'CHANNEL', bootOrder=3D= '0', specParams=3D'[]', address=3D'{bus=3D0, controller=3D0, type=3Dvir= tio-serial, port=3D2}', managed=3D'false', plugged=3D'true', readOnly=3D= 'false', deviceAlias=3D'channel1', customProperties=3D'[]', snapshotId=3D= 'null', logicalName=3D'null', hostDevice=3D'null'}}, vmType=3Dkvm, memS= ize=3D8192, smpCoresPerSocket=3D1, vmName=3DOracle=5FSECONDARY, nice=3D= 0, status=3DUp, maxMemSize=3D32768, bootMenuEnable=3Dfalse, vmId=3D3f57= e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=3D2, smpThreadsPerCore= =3D1, smartcardEnable=3Dfalse, maxMemSlots=3D16, kvmEnable=3Dtrue, pitR= einjection=3Dfalse, displayNetwork=3Dovirtmgmt, devices=3D[Ljava.lang.O= bject;@2e4d3dd3, memGuaranteedSize=3D8192, maxVCpus=3D16, clientIp=3D, = statusTime=3D4304259600, display=3Dvnc}], log id: 7cc65298<br />2018-02= -12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitor= ing.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc De= vice without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573= 004a099d devices, skipping device: {device=3Dvnc, specParams=3D{display= Network=3Dovirtmgmt, keyMap=3Dfr, displayIp=3D192.168.0.5}, type=3Dgrap= hics, deviceId=3D813957b1-446a-4e88-9e40-9fe76d2c442d, port=3D5901}<br = />2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroke= r.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received= a lease Device without an address when processing VM 3f57e669-5e4c-4d1= 0-85cc-d573004a099d devices, skipping device: {lease=5Fid=3D3f57e669-5e= 4c-4d10-85cc-d573004a099d, 
sd=5Fid=3D1e51cecc-eb2e-47d0-b185-920fdc7afa= 16, deviceId=3D{uuid=3Da09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=3D= 6291456, device=3Dlease, path=3D/rhev/data-center/mnt/glusterSD/192.168= .0.6:=5FDATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom=5Fmd/xleases, t= ype=3Dlease}<br />2018-02-12 16:46:46,260+01 INFO [org.ovirt.engi= ne.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler= 5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=3Dtrue, = emulatedMachine=3Dpc-i440fx-rhel7.3.0, afterMigrationStatus=3D, tabletE= nable=3Dtrue, pid=3D18748, guestDiskMapping=3D{0QEMU=5FQEMU=5FHARDDISK=5F= d890fa68-fba4-4f49-9=3D{name=3D/dev/sda}, QEMU=5FDVD-ROM=5FQM00003=3D{n= ame=3D/dev/sr0}}, transparentHugePages=3Dtrue, timeOffset=3D0, cpuType=3D= Nehalem, smp=3D2, guestNumaNodes=3D[Ljava.lang.Object;@77951faf, custom= =3D{device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c93ab-4df1= -435c-af02-565039fcc254=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'879c9= 3ab-4df1-435c-af02-565039fcc254', vmId=3D'3f57e669-5e4c-4d10-85cc-d5730= 04a099d'}', device=3D'unix', type=3D'CHANNEL', bootOrder=3D'0', specPar= ams=3D'[]', address=3D'{bus=3D0, controller=3D0, type=3Dvirtio-serial, = port=3D1}', managed=3D'false', plugged=3D'true', readOnly=3D'false', de= viceAlias=3D'channel0', customProperties=3D'[]', snapshotId=3D'null', l= ogicalName=3D'null', hostDevice=3D'null'}, device=5Ffbddd528-7d93-49c6-= a286-180e021cb274device=5F879c93ab-4df1-435c-af02-565039fcc254device=5F= 8945f61a-abbe-4156-8485-a4aa6f1908dbdevice=5F017b5e59-01c4-4aac-bf0c-b5= d9557284d6=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'017b5e59-01c4-4aac= -bf0c-b5d9557284d6', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', d= evice=3D'tablet', type=3D'UNKNOWN', bootOrder=3D'0', specParams=3D'[]',= address=3D'{bus=3D0, type=3Dusb, port=3D1}', managed=3D'false', plugge= d=3D'true', readOnly=3D'false', deviceAlias=3D'input0', customPropertie= s=3D'[]', snapshotId=3D'null', logicalName=3D'null', 
hostDevice=3D'null= '}, device=5Ffbddd528-7d93-49c6-a286-180e021cb274=3DVmDevice:{id=3D'VmD= eviceId:{deviceId=3D'fbddd528-7d93-49c6-a286-180e021cb274', vmId=3D'3f5= 7e669-5e4c-4d10-85cc-d573004a099d'}', device=3D'ide', type=3D'CONTROLLE= R', bootOrder=3D'0', specParams=3D'[]', address=3D'{slot=3D0x01, bus=3D= 0x00, domain=3D0x0000, type=3Dpci, function=3D0x1}', managed=3D'false',= plugged=3D'true', readOnly=3D'false', deviceAlias=3D'ide', customPrope= rties=3D'[]', snapshotId=3D'null', logicalName=3D'null', hostDevice=3D'= null'}, device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c93ab-= 4df1-435c-af02-565039fcc254device=5F8945f61a-abbe-4156-8485-a4aa6f1908d= b=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'8945f61a-abbe-4156-8485-a4a= a6f1908db', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', device=3D'= unix', type=3D'CHANNEL', bootOrder=3D'0', specParams=3D'[]', address=3D= '{bus=3D0, controller=3D0, type=3Dvirtio-serial, port=3D2}', managed=3D= 'false', plugged=3D'true', readOnly=3D'false', deviceAlias=3D'channel1'= , customProperties=3D'[]', snapshotId=3D'null', logicalName=3D'null', h= ostDevice=3D'null'}}, vmType=3Dkvm, memSize=3D8192, smpCoresPerSocket=3D= 1, vmName=3DOracle=5FSECONDARY, nice=3D0, status=3DUp, maxMemSize=3D327= 68, bootMenuEnable=3Dfalse, vmId=3D3f57e669-5e4c-4d10-85cc-d573004a099d= , numOfIoThreads=3D2, smpThreadsPerCore=3D1, smartcardEnable=3Dfalse, m= axMemSlots=3D16, kvmEnable=3Dtrue, pitReinjection=3Dfalse, displayNetwo= rk=3Dovirtmgmt, devices=3D[Ljava.lang.Object;@286410fd, memGuaranteedSi= ze=3D8192, maxVCpus=3D16, clientIp=3D, statusTime=3D4304263620, display= =3Dvnc}], log id: 58cdef4c<br />2018-02-12 16:46:46,267+01 INFO [= org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (Defaul= tQuartzScheduler5) [7fcb200a] Received a vnc Device without an address = when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skippi= ng device: {device=3Dvnc, specParams=3D{displayNetwork=3Dovirtmgmt, key= Map=3Dfr, 
displayIp=3D192.168.0.5}, type=3Dgraphics, deviceId=3D813957b= 1-446a-4e88-9e40-9fe76d2c442d, port=3D5901}<br />2018-02-12 16:46:46,26= 8+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMo= nitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device = without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a0= 99d devices, skipping device: {lease=5Fid=3D3f57e669-5e4c-4d10-85cc-d57= 3004a099d, sd=5Fid=3D1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId=3D{= uuid=3Da09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=3D6291456, device=3D= lease, path=3D/rhev/data-center/mnt/glusterSD/192.168.0.6:=5FDATA01/1e5= 1cecc-eb2e-47d0-b185-920fdc7afa16/dom=5Fmd/xleases, type=3Dlease}<p>&nb= sp;</p></blockquote><br />For the VM with 2 vdisks we see :<blockquote>= <p>2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.Mig= rateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838= dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks=3D'[f7d4ec= 12-627a-4b83-b59e-886400d55474=3DVM]', sharedLocks=3D''}'<br />2018-02-= 12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToSer= verCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8f= e-8b838dd7458e] Running command: MigrateVmToServerCommand internal: fal= se. 
Entities affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 = Type: VMAction group MIGRATE=5FVM with role type USER<br />2018-02-12 1= 6:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCo= mmand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b8= 38dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAs= ync=3D'true', hostId=3D'd569c2dd-8f30-4878-8aea-858db285cf69', vmId=3D'= f7d4ec12-627a-4b83-b59e-886400d55474', srcHost=3D'192.168.0.5', dstVdsI= d=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=3D'192.168.0.6:5432= 1', migrationMethod=3D'ONLINE', tunnelMigration=3D'false', migrationDow= ntime=3D'0', autoConverge=3D'true', migrateCompressed=3D'false', consol= eAddress=3D'null', maxBandwidth=3D'500', enableGuestEvents=3D'true', ma= xIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2', convergenceSche= dule=3D'[init=3D[{name=3DsetDowntime, params=3D[100]}], stalling=3D[{li= mit=3D1, action=3D{name=3DsetDowntime, params=3D[150]}}, {limit=3D2, ac= tion=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3, action=3D{nam= e=3DsetDowntime, params=3D[300]}}, {limit=3D4, action=3D{name=3DsetDown= time, params=3D[400]}}, {limit=3D6, action=3D{name=3DsetDowntime, param= s=3D[500]}}, {limit=3D-1, action=3D{name=3Dabort, params=3D[]}}]]'}), l= og id: 3702a9e0<br />2018-02-12 16:49:06,713+01 INFO [org.ovirt.e= ngine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thre= ad.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, Migr= ateBrokerVDSCommand(HostName =3D ginger.local.systea.fr, MigrateVDSComm= andParameters:{runAsync=3D'true', hostId=3D'd569c2dd-8f30-4878-8aea-858= db285cf69', vmId=3D'f7d4ec12-627a-4b83-b59e-886400d55474', srcHost=3D'1= 92.168.0.5', dstVdsId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost= =3D'192.168.0.6:54321', migrationMethod=3D'ONLINE', tunnelMigration=3D'= false', migrationDowntime=3D'0', autoConverge=3D'true', migrateCompress= ed=3D'false', consoleAddress=3D'null', 
maxBandwidth=3D'500', enableGues= tEvents=3D'true', maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D= '2', convergenceSchedule=3D'[init=3D[{name=3DsetDowntime, params=3D[100= ]}], stalling=3D[{limit=3D1, action=3D{name=3DsetDowntime, params=3D[15= 0]}}, {limit=3D2, action=3D{name=3DsetDowntime, params=3D[200]}}, {limi= t=3D3, action=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4, acti= on=3D{name=3DsetDowntime, params=3D[400]}}, {limit=3D6, action=3D{name=3D= setDowntime, params=3D[500]}}, {limit=3D-1, action=3D{name=3Dabort, par= ams=3D[]}}]]'}), log id: 1840069c<br />2018-02-12 16:49:06,724+01 INFO = [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSComman= d] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd= 7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c<br />2018-02-1= 2 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVD= SCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-= 8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id:= 3702a9e0<br />2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.= core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.= pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT=5FID: VM= =5FMIGRATION=5FSTART(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838= dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null= , Custom ID: null, Custom Event ID: -1, Message: Migration started (VM:= Oracle=5FPRIMARY, Source: ginger.local.systea.fr, Destination: victor.= local.systea.fr, User: admin@internal-authz).<br />...<br />2018-02-12 = 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.= VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VM= s from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'<br />2018-02-12 16:49= :16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAna= lyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e= 
-886400d55474'(Oracle=5FPRIMARY) was unexpectedly detected as 'Migratin= gTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.= fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')<br />2018-02-1= 2 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitorin= g.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b= 83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d= 7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migr= ation is done<br />...<br />2018-02-12 16:49:31,484+01 INFO [org.= ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzSchedu= ler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle=5FPRI= MARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-= 4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-= 8f30-4878-8aea-858db285cf69')<br />2018-02-12 16:49:31,484+01 INFO &nbs= p;[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuart= zScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is mi= grating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.syst= ea.fr) ignoring it in the refresh until migration is done<br /> </= p></blockquote><br />and so on, last lines repeated indefinitly for hou= rs since we poweroff the VM...<br />Is this something known ? Any idea = about that ?<br /><br />Thanks<br /><br />Ovirt 4.1.6, updated last at = feb-13. Gluster 3.12.1.<br /><br />--<br /><style type=3D"text/css">.Te= xt1 { color: black; font-size:9pt; font-family:Verdana; } .Text2 { color: black; font-size:7pt; font-family:Verdana; }</style><p class=3D"Text1">Cordialement,<br /><br /><b>Frank Soyer= </b></p></html> ------=_=-_OpenGroupware_org_NGMime-29680-1518600202.220070-147--------
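[Editor's aside] The failure signature above is the same VmAnalyzer "unexpectedly detected as 'MigratingTo'" line repeating every monitoring cycle for one VM id. That pattern can be spotted mechanically when triaging engine.log; here is a minimal sketch, where the regex and the 10-repeat threshold are my own assumptions, not anything oVirt ships:

```python
import re
from collections import Counter

# The VmAnalyzer line that keeps repeating while a migration is stuck,
# in the format seen in the engine.log excerpts above.
STUCK_RE = re.compile(
    r"VM '(?P<vm>[0-9a-f-]{36})'.*unexpectedly detected as 'MigratingTo'"
)

def stuck_migrations(lines, threshold=10):
    """Return VM ids whose 'unexpectedly detected' line appears at
    least `threshold` times (an arbitrary cutoff) in the log lines."""
    counts = Counter()
    for line in lines:
        m = STUCK_RE.search(line)
        if m:
            counts[m.group("vm")] += 1
    return {vm for vm, n in counts.items() if n >= threshold}

# A healthy migration logs this line once or twice; a stuck one
# repeats it every cycle for hours.
sample = [
    "2018-02-12 16:49:31,484+01 INFO [...VmAnalyzer] (DefaultQuartzScheduler5) "
    "[11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was "
    "unexpectedly detected as 'MigratingTo' on VDS '...'(victor.local.systea.fr)"
] * 12
print(stuck_migrations(sample))  # → {'f7d4ec12-627a-4b83-b59e-886400d55474'}
```

Fed the repeated 16:49 lines above, it flags Oracle_PRIMARY's id; the healthy Oracle_SECONDARY migration never crosses the threshold.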

Hi Frank,

I already replied to your last email. Can you provide the VDSM logs from the time of the migration failure for both hosts, ginger.local.systea.fr and victor.local.systea.fr?

Thanks,
Maor

On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:
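[Editor's aside] Rather than attaching whole files, the slice of /var/log/vdsm/vdsm.log around the failure can be cut out by timestamp on each host. A rough sketch, assuming only that each entry starts with a 'YYYY-MM-DD HH:MM:SS' prefix like the engine.log excerpts in this thread:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def log_window(lines, start, end):
    """Keep lines whose leading timestamp falls in [start, end].
    Untimestamped continuation lines (e.g. tracebacks) inherit the
    decision made for the last timestamped line."""
    keep, inside = [], False
    for line in lines:
        try:
            inside = start <= datetime.strptime(line[:19], FMT) <= end
        except ValueError:
            pass  # no timestamp: keep only if we are inside the window
        if inside:
            keep.append(line)
    return keep

# Example: ten minutes around the failed 16:49 migration attempt.
sample = [
    "2018-02-12 16:40:00,000+0100 INFO before the window",
    "2018-02-12 16:49:06,712+0100 INFO migration starts",
    "Traceback (most recent call last):",  # continuation line
    "2018-02-12 17:10:00,000+0100 INFO after the window",
]
window = log_window(sample,
                    datetime(2018, 2, 12, 16, 45),
                    datetime(2018, 2, 12, 16, 55))
print(len(window))  # → 2
```

Running it per host keeps the attachment down to the migration window while still including any traceback lines inside it.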
Hi all,
I discovered yesterday a problem when migrating VMs with more than one vdisk.
On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs needed for a test from a template with a 20G vdisk. On these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks work well.
Then I saw some updates waiting on the host and tried to put it into maintenance... but it stalled on the two VMs. They were marked "migrating" but were no longer accessible. Other (small) VMs with only 1 vdisk migrated without problem at the same time.
I saw that a kvm process for the (big) VMs was launched on the source AND the destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for the VMs: it failed. The only way to stop it was to power off the VMs: the kvm process died on the 2 hosts and the GUI reported a failed migration.
In doubt, I tried to delete the second vdisk on one of these VMs: it then migrated without error! And with no access problem.
I tried to extend the first vdisk of the second VM, then delete the second vdisk: it now migrates without problem!

So after another test with a VM with 2 vdisks, I can say that this is what blocked the migration process :(

In engine.log, for a VM with 1 vdisk that migrates well, we see:
2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381
2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381
2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz).
2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done
....
2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.
vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core. vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core. vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down' 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status 'MigratingTo' 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up' 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core. vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core. 
vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal. dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A)) 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core. vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core. 
vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93- 49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02- 565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6= VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6- a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId=' fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a- 
abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{ deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600 <(430)%20425-9600>, display=vnc}], log id: 7cc65298 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/ glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core. 
vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_ HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang. Object;@77951faf, custom={device_fbddd528-7d93- 49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02- 565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6= VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6- a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId=' fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, 
device_fbddd528-7d93-49c6-a286-180e021cb274device_ 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a- abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{ deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620 <(430)%20426-3620>, display=vnc}], log id: 58cdef4c 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/ glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
For the VM with 2 vdisks, we see:
2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}' 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core. 
vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core. vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal. 
dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz). ... 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea- 858db285cf69') 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done ... 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea- 858db285cf69') 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core. vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
and so on, the last lines repeating indefinitely for hours until we powered off the VM... Is this a known issue? Any idea about it?
Thanks
oVirt 4.1.6, last updated Feb 13. Gluster 3.12.1.
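For reference, this is roughly how the migration-related lines above can be pulled out of engine.log for a single VM (a minimal sketch: the default engine.log path and the VM id are the ones from this report, so adjust both to your setup):

```shell
# migration_events LOGFILE VM_ID
# Print only the migration lifecycle lines (command start/finish and
# VM_MIGRATION_* audit events) that mention the given VM id.
migration_events() {
  grep -E 'MigrateVmToServerCommand|MigrateVDSCommand|VM_MIGRATION_(START|DONE|FAILED)' "$1" \
    | grep -F "$2"
}

# Typical use on the engine host (path and id from this report):
# migration_events /var/log/ovirt-engine/engine.log 3f57e669-5e4c-4d10-85cc-d573004a099d
```

This keeps the command/audit events while dropping the very verbose FullListVDSCommand device dumps.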
--
Regards,
*Frank Soyer *
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

------=_=-_OpenGroupware_org_NGMime-18019-1518771817.081391-1------ Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Content-Length: 30830 Hi Maor, sorry for the double post, I've change the email adress of my account a= nd supposed that I'd need to re-post it. And thank you for your time. Here are the logs. I added a vdisk to an e= xisting VM : it no more migrates, needing to poweroff it after minutes.= Then simply deleting the second disk makes migrate it in exactly 9s wi= thout problem !=C2=A0 https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561 https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d -- Cordialement, Frank Soyer=C2=A0Le Mercredi, F=C3=A9vrier 14, 2018 11:04 CET, Maor Lip= chuk <mlipchuk@redhat.com> a =C3=A9crit: =C2=A0Hi Frank,=C2=A0I already replied on your last email.Can you provi= de the VDSM logs from the time of the migration failure for both hosts:= =C2=A0=C2=A0ginger.local.systea.fr=C2=A0and=C2=A0victor.local.systea.fr= =C2=A0Thanks,Maor=C2=A0On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer= @systea.fr> wrote: Hi all, I discovered yesterday a problem when migrating VM with more than one v= disk. On our test servers (oVirt4.1, shared storage with Gluster), I created = 2 VMs needed for a test, from a template with a 20G vdisk. On this VMs = I added a 100G vdisk (for this tests I didn't want to waste time to ext= end the existing vdisks... But I lost time finally...). The VMs with th= e 2 vdisks works well. Now I saw some updates waiting on the host. I tried to put it in mainte= nance... But it stopped on the two VM. They were marked "migrating", bu= t no more accessible. Other (small) VMs with only 1 vdisk was migrated = without problem at the same time. I saw that a kvm process for the (big) VMs was launched on the source A= ND destination host, but after tens of minutes, the migration and the V= Ms was always freezed. I tried to cancel the migration for the VMs : fa= iled. 
The only way to stop them was to power off the VMs: the kvm processes died on both hosts and the GUI reported a failed migration.
Just in case, I tried deleting the second vdisk on one of these VMs: it then migrated without error, and with no access problems.
On the second VM I extended the first vdisk and then deleted the second vdisk: it now also migrates without problems!
So after another test with a VM with 2 vdisks, I can say that the second vdisk is what blocked the migration process :(

In engine.log, for a VM with 1 vdisk that migrates well, we see:

2018-02-12 16:46:29,705+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false.
Entities affected :  ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:46:30,261+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null',
maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381
2018-02-12 16:46:30,277+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381
2018-02-12 16:46:30,285+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz).
2018-02-12 16:46:31,106+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
2018-02-12 16:46:31,147+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
2018-02-12 16:46:31,150+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
2018-02-12 16:46:31,151+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:31,151+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:31,152+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
2018-02-12 16:46:31,152+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done
....
2018-02-12 16:46:41,631+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
2018-02-12 16:46:41,632+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57
2018-02-12 16:46:41,650+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57
2018-02-12 16:46:41,650+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down'
2018-02-12 16:46:41,651+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'.
Setting VM to status 'MigratingTo'
2018-02-12 16:46:42,163+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up'
2018-02-12 16:46:42,169+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
2018-02-12 16:46:42,174+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281
2018-02-12 16:46:42,194+01 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))
2018-02-12 16:46:42,201+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:42,203+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
2018-02-12 16:46:42,254+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298
2018-02-12 16:46:42,257+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:42,257+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:46,260+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null',
logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c
2018-02-12 16:46:46,267+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:46,268+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}

For the VM with 2 vdisks we see:

2018-02-12 16:49:06,112+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}'
2018-02-12 16:49:06,407+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false.
Entities affected :  ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:49:06,712+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0
2018-02-12 16:49:06,713+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null',
maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c
2018-02-12 16:49:06,724+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c
2018-02-12 16:49:06,732+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
2018-02-12 16:49:06,753+01 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz).
...
2018-02-12 16:49:16,453+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
2018-02-12 16:49:16,455+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:16,455+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
...
2018-02-12 16:49:31,484+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:31,484+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done

...and so on, with these last lines repeating indefinitely for hours until we powered off the VM.
Is this a known issue? Any ideas about it?

Thanks

oVirt 4.1.6, last updated Feb 13. Gluster 3.12.1.
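The healthy and the stuck migrations can be told apart mechanically from the audit events in these excerpts: a healthy run pairs a VM_MIGRATION_START with a VM_MIGRATION_DONE a few seconds later, while the stuck one never emits the DONE event. A minimal sketch of that check (not an oVirt tool; it only assumes the timestamp and EVENT_ID formats shown in the logs above):

```python
import re
from datetime import datetime

# Timestamp prefix as it appears in engine.log, e.g. "2018-02-12 16:46:30,301"
TS_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})")

def parse_ts(line):
    m = TS_RE.match(line)
    return datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S,%f") if m else None

def migration_status(lines, correlation_id):
    """Report whether the migration with this correlation ID completed."""
    start = done = None
    for line in lines:
        if correlation_id not in line:
            continue
        if "VM_MIGRATION_START" in line and start is None:
            start = parse_ts(line)
        elif "VM_MIGRATION_DONE" in line:
            done = parse_ts(line)
    if start is None:
        return "no migration found"
    if done is None:
        return "migration never completed (stuck?)"
    return "completed in %.0f s" % (done - start).total_seconds()

# Two toy lines mimicking the audit entries above (content abbreviated):
log = [
    "2018-02-12 16:46:30,301+01 INFO [2f712024] EVENT_ID: VM_MIGRATION_START(62)",
    "2018-02-12 16:46:42,194+01 INFO [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024",
]
print(migration_status(log, "2f712024"))  # the healthy 1-vdisk VM -> completed in 12 s
print(migration_status(log, "92b5af33"))  # the stuck 2-vdisk VM -> no migration found
```

Run against a full engine.log, the stuck 2-vdisk VM's correlation ID would report "migration never completed" since no DONE event is ever logged for it.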
--
Cordialement,
Frank Soyer

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
/>Object;@1d9042cd, smar= tcardEnable=3Dfalse, custom=3D{device=5Ffbddd528-7d93-<wbr />49c6-a286-= 180e021cb274device=5F<wbr />879c93ab-4df1-435c-af02-<wbr />565039fcc254= =3DVmDevice:{id=3D'<wbr />VmDeviceId:{deviceId=3D'<wbr />879c93ab-4df1-= 435c-af02-<wbr />565039fcc254', vmId=3D'3f57e669-5e4c-4d10-85cc-<wbr />= d573004a099d'}', device=3D'unix', type=3D'CHANNEL', bootOrder=3D'0', sp= ecParams=3D'[]', address=3D'{bus=3D0, controller=3D0, type=3Dvirtio-ser= ial, port=3D1}', managed=3D'false', plugged=3D'true', readOnly=3D'false= ', deviceAlias=3D'channel0', customProperties=3D'[]', snapshotId=3D'nul= l', logicalName=3D'null', hostDevice=3D'null'}, device=5Ffbddd528-7d93-= 49c6-<wbr />a286-180e021cb274device=5F<wbr />879c93ab-4df1-435c-af02-<w= br />565039fcc254device=5F8945f61a-<wbr />abbe-4156-8485-<wbr />a4aa6f1= 908dbdevice=5F017b5e59-<wbr />01c4-4aac-bf0c-b5d9557284d6=3D<wbr />VmDe= vice:{id=3D'VmDeviceId:{<wbr />deviceId=3D'017b5e59-01c4-4aac-<wbr />bf= 0c-b5d9557284d6', vmId=3D'3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d'}= ', device=3D'tablet', type=3D'UNKNOWN', bootOrder=3D'0', specParams=3D'= []', address=3D'{bus=3D0, type=3Dusb, port=3D1}', managed=3D'false', pl= ugged=3D'true', readOnly=3D'false', deviceAlias=3D'input0', customPrope= rties=3D'[]', snapshotId=3D'null', logicalName=3D'null', hostDevice=3D'= null'}, device=5Ffbddd528-7d93-49c6-<wbr />a286-180e021cb274=3DVmDevice= :{<wbr />id=3D'VmDeviceId:{deviceId=3D'<wbr />fbddd528-7d93-49c6-a286-<= wbr />180e021cb274', vmId=3D'3f57e669-5e4c-4d10-85cc-<wbr />d573004a099= d'}', device=3D'ide', type=3D'CONTROLLER', bootOrder=3D'0', specParams=3D= '[]', address=3D'{slot=3D0x01, bus=3D0x00, domain=3D0x0000, type=3Dpci,= function=3D0x1}', managed=3D'false', plugged=3D'true', readOnly=3D'fal= se', deviceAlias=3D'ide', customProperties=3D'[]', snapshotId=3D'null',= logicalName=3D'null', hostDevice=3D'null'}, device=5Ffbddd528-7d93-49c= 6-<wbr />a286-180e021cb274device=5F<wbr />879c93ab-4df1-435c-af02-<wbr = 
/>565039fcc254device=5F8945f61a-<wbr />abbe-4156-8485-a4aa6f1908db=3D<w= br />VmDevice:{id=3D'VmDeviceId:{<wbr />deviceId=3D'8945f61a-abbe-4156-= <wbr />8485-a4aa6f1908db', vmId=3D'3f57e669-5e4c-4d10-85cc-<wbr />d5730= 04a099d'}', device=3D'unix', type=3D'CHANNEL', bootOrder=3D'0', specPar= ams=3D'[]', address=3D'{bus=3D0, controller=3D0, type=3Dvirtio-serial, = port=3D2}', managed=3D'false', plugged=3D'true', readOnly=3D'false', de= viceAlias=3D'channel1', customProperties=3D'[]', snapshotId=3D'null', l= ogicalName=3D'null', hostDevice=3D'null'}}, vmType=3Dkvm, memSize=3D819= 2, smpCoresPerSocket=3D1, vmName=3DOracle=5FSECONDARY, nice=3D0, status= =3DMigration Source, maxMemSize=3D32768, bootMenuEnable=3Dfalse, vmId=3D= 3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d, numOfIoThreads=3D2, smpThr= eadsPerCore=3D1, memGuaranteedSize=3D8192, kvmEnable=3Dtrue, pitReinjec= tion=3Dfalse, displayNetwork=3Dovirtmgmt, devices=3D[Ljava.lang.Object;= @<wbr />28ae66d7, display=3Dvnc, maxVCpus=3D16, clientIp=3D, statusTime= =3D4299484520, maxMemSlots=3D16}], log id: 54b4b435<br />2018-02-12 16:= 46:31,150+01 INFO [org.ovirt.engine.core.<wbr />vdsbroker.monitor= ing.<wbr />VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] F= etched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-<wbr />858db285cf69'<br = />2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.<wbr />v= dsbroker.monitoring.<wbr />VmDevicesMonitoring] (DefaultQuartzScheduler= 9) [54a65b66] Received a vnc Device without an address when processing = VM 3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d devices, skipping device= : {device=3Dvnc, specParams=3D{displayNetwork=3D<wbr />ovirtmgmt, keyMa= p=3Dfr, displayIp=3D192.168.0.6}, type=3Dgraphics, deviceId=3D813957b1-= 446a-4e88-<wbr />9e40-9fe76d2c442d, port=3D5901}<br />2018-02-12 16:46:= 31,151+01 INFO [org.ovirt.engine.core.<wbr />vdsbroker.monitoring= .<wbr />VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Recei= ved a lease Device without an address when 
processing VM 3f57e669-5e4c-= 4d10-85cc-<wbr />d573004a099d devices, skipping device: {lease=5Fid=3D3= f57e669-5e4c-4d10-<wbr />85cc-d573004a099d, sd=5Fid=3D1e51cecc-eb2e-47d= 0-b185-<wbr />920fdc7afa16, deviceId=3D{uuid=3Da09949aa-5642-<wbr />4b6= d-94a4-8b0d04257be5}, offset=3D6291456, device=3Dlease, path=3D/rhev/da= ta-center/mnt/<wbr />glusterSD/192.168.0.6:=5FDATA01/<wbr />1e51cecc-eb= 2e-47d0-b185-<wbr />920fdc7afa16/dom=5Fmd/xleases, type=3Dlease}<br />2= 018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.<wbr />vdsb= roker.monitoring.<wbr />VmAnalyzer] (DefaultQuartzScheduler1) [27fac647= ] VM '3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d'(Oracle=5F<wbr />SECO= NDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30= -4878-8aea-<wbr />858db285cf69'(<a target=3D"=5Fblank" href=3D"http://g= inger.local.systea.fr">ginger.local.<wbr />systea.fr</a>) (expected on = 'ce3938b1-b23f-4d22-840a-<wbr />f17d7cd87bb1')<br />2018-02-12 16:46:31= ,152+01 INFO [org.ovirt.engine.core.<wbr />vdsbroker.monitoring.<= wbr />VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4= c-4d10-85cc-<wbr />d573004a099d' is migrating to VDS 'd569c2dd-8f30-487= 8-8aea-<wbr />858db285cf69'(<a target=3D"=5Fblank" href=3D"http://ginge= r.local.systea.fr">ginger.local.<wbr />systea.fr</a>) ignoring it in th= e refresh until migration is done<br />....<br />2018-02-12 16:46:41,63= 1+01 INFO [org.ovirt.engine.core.<wbr />vdsbroker.monitoring.<wbr= />VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85c= c-<wbr />d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-= 840a-<wbr />f17d7cd87bb1'(<a target=3D"=5Fblank" href=3D"http://victor.= local.systea.fr">victor.local.<wbr />systea.fr</a>)<br />2018-02-12 16:= 46:41,632+01 INFO [org.ovirt.engine.core.<wbr />vdsbroker.vdsbrok= er.<wbr />DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, Destr= oyVDSCommand(HostName =3D <a target=3D"=5Fblank" href=3D"http://victor.= 
local.systea.fr">victor.local.systea.fr</a>, DestroyVmVDSCommandParamet= ers:<wbr />{runAsync=3D'true', hostId=3D'ce3938b1-b23f-4d22-<wbr />840a= -f17d7cd87bb1', vmId=3D'3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d', f= orce=3D'false', secondsToWait=3D'0', gracefully=3D'false', reason=3D'',= ignoreNoVm=3D'true'}), log id: 560eca57<br />2018-02-12 16:46:41,650+0= 1 INFO [org.ovirt.engine.core.<wbr />vdsbroker.vdsbroker.<wbr />D= estroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSComma= nd, log id: 560eca57<br />2018-02-12 16:46:41,650+01 INFO [org.ov= irt.engine.core.<wbr />vdsbroker.monitoring.<wbr />VmAnalyzer] (ForkJoi= nPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d'(= Oracle=5F<wbr />SECONDARY) moved from 'MigratingFrom' --> 'Down'<br = />2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.<wbr />v= dsbroker.monitoring.<wbr />VmAnalyzer] (ForkJoinPool-1-worker-11) [] Ha= nding over VM '3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d'(Oracle=5F<w= br />SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-<wbr />858db285cf69'. 
= Setting VM to status 'MigratingTo'<br />2018-02-12 16:46:42,163+01 INFO= [org.ovirt.engine.core.<wbr />vdsbroker.monitoring.<wbr />VmAnal= yzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-<wbr />d= 573004a099d'(Oracle=5F<wbr />SECONDARY) moved from 'MigratingTo' -->= 'Up'<br />2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core= .<wbr />vdsbroker.vdsbroker.<wbr />MigrateStatusVDSCommand] (ForkJoinPo= ol-1-worker-4) [] START, MigrateStatusVDSCommand(<wbr />HostName =3D <a= target=3D"=5Fblank" href=3D"http://ginger.local.systea.fr">ginger.loca= l.systea.fr</a>, MigrateStatusVDSCommandParamet<wbr />ers:{runAsync=3D'= true', hostId=3D'd569c2dd-8f30-4878-<wbr />8aea-858db285cf69', vmId=3D'= 3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d'}), log id: 7a25c281<br />2= 018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.<wbr />vdsb= roker.vdsbroker.<wbr />MigrateStatusVDSCommand] (ForkJoinPool-1-worker-= 4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281<br />2018-02-12= 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.<wbr />dbbroker.= auditloghandling.<wbr />AuditLogDirector] (ForkJoinPool-1-worker-4) [] = EVENT=5FID: VM=5FMIGRATION=5FDONE(63), Correlation ID: 2f712024-5982-46= a8-82c8-<wbr />fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-<wbr />5a1= e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Mess= age: Migration completed (VM: Oracle=5FSECONDARY, Source: <a target=3D"= =5Fblank" href=3D"http://victor.local.systea.fr">victor.local.systea.fr= </a>, Destination: <a target=3D"=5Fblank" href=3D"http://ginger.local.s= ystea.fr">ginger.local.systea.fr</a>, Duration: 11 seconds, Total: 11 s= econds, Actual downtime: (N/A))<br />2018-02-12 16:46:42,201+01 INFO &n= bsp;[org.ovirt.engine.core.bll.<wbr />MigrateVmToServerCommand] (ForkJo= inPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks=3D= '[<wbr />3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d=3DVM]', sharedLock= s=3D''}'<br />2018-02-12 16:46:42,203+01 INFO 
2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{..., address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c
2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, ...}
...
... [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[...
...
... (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz).
...
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
...
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done

and so on, the last lines repeating indefinitely for hours until we powered off the VM...
Is this something known? Any idea about it?

Thanks

oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.

--
Regards,

Frank Soyer
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
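The two VmAnalyzer lines repeating forever are the recognizable symptom of the stuck migration. As a debugging aid, here is a minimal sketch in Python (the log lines are abridged from the excerpt above; this is not part of any oVirt tooling) that counts such repeats per VM id when scanning an engine.log:

```python
import re
from collections import Counter

# Matches the VmAnalyzer line that keeps repeating while a migration is stuck.
STUCK_RE = re.compile(
    r"VM '(?P<vm>[0-9a-f-]+)'.*was unexpectedly detected as 'MigratingTo'"
)

def count_stuck_reports(lines):
    """Count 'unexpectedly detected as MigratingTo' reports per VM id."""
    counts = Counter()
    for line in lines:
        m = STUCK_RE.search(line)
        if m:
            counts[m.group("vm")] += 1
    return counts

# Abridged lines from the engine.log excerpt above:
sample = [
    "2018-02-12 16:49:16,455+01 INFO VmAnalyzer VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1'",
    "2018-02-12 16:49:31,484+01 INFO VmAnalyzer VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1'",
]
print(count_stuck_reports(sample))
```

A VM whose count keeps growing across refresh cycles, with no VM_MIGRATION_DONE event, is likely stuck in the state described above.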

Hi Frank,

Sorry about the delayed response. I've been going through the logs you attached, but I could not find any specific indication of why the migration failed because of the disk you mentioned.
Does this VM run with both disks on the target host without migration?

Regards,
Maor

On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi Maor,
Sorry for the double post; I changed the email address on my account and assumed I needed to re-post it. And thank you for your time.
Here are the logs. I added a vdisk to an existing VM: it no longer migrates, and I had to power it off after several minutes. Then, simply deleting the second disk lets it migrate in exactly 9 seconds, without any problem!
https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
--
Regards,
*Frank Soyer*

On Wednesday, February 14, 2018 at 11:04 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
Hi Frank,
I already replied to your last email. Can you provide the VDSM logs from the time of the migration failure for both hosts: ginger.local.systea.fr <http://ginger.local.systea.fr/> and victor.local.systea.fr
Thanks, Maor
On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi all,
I discovered yesterday a problem when migrating VMs with more than one vdisk.
On our test servers (oVirt 4.1, shared storage with Gluster), I created the 2 VMs needed for a test from a template with a 20G vdisk. To these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks work well.
Then I saw some updates waiting on the host and tried to put it into maintenance... but the operation stalled on those two VMs. They were marked "migrating" but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time.
I saw that a kvm process for the (big) VMs was launched on the source AND the destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for the VMs: it failed. The only way to stop it was to power off the VMs: the kvm processes died on the 2 hosts and the GUI reported a failed migration.
To check, I tried to delete the second vdisk on one of these VMs: it then migrated without error, and with no access problem. I tried to extend the first vdisk of the second VM, then delete the second vdisk: it now migrates without problem!
So after another test with a VM with 2 vdisks, I can say that it is the second vdisk that blocked the migration process :(
In engine.log, for a VM with 1 vdisk that migrates well, we see:
2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VM
Action group MIGRATE_VM with role type USER
2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381
2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381
2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz).
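The convergenceSchedule in the record above is the policy the engine hands to VDSM: start with 100 ms of allowed downtime, raise it each time the transfer is judged to be stalling, and give up after the last step. A simplified model of that policy follows (illustrative only; the real decision logic lives in VDSM's migration monitor, and `action_for_stall_count` is a hypothetical helper, not an oVirt API):

```python
# Simplified model of the convergenceSchedule from the log record above.
# Each tuple is (limit, action): once the observed stall count reaches a
# step's limit, that action applies; limit=-1 means abort the migration.
SCHEDULE = [
    (1, ("setDowntime", 150)),
    (2, ("setDowntime", 200)),
    (3, ("setDowntime", 300)),
    (4, ("setDowntime", 400)),
    (6, ("setDowntime", 500)),
    (-1, ("abort", None)),
]
INITIAL_DOWNTIME_MS = 100  # init=[{name=setDowntime, params=[100]}]

def action_for_stall_count(stalls, schedule=SCHEDULE):
    """Return the (action, value) applied after `stalls` stalling iterations."""
    for limit, action in schedule:
        if limit == -1 or stalls <= limit:
            return action
    return ("abort", None)

print(action_for_stall_count(3))   # ('setDowntime', 300)
print(action_for_stall_count(10))  # ('abort', None)
```

The practical point: with this schedule, a migration that never converges is allowed up to 500 ms of downtime and is then aborted, rather than hanging forever.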
2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done
....
2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down' 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status 'MigratingTo' 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up' 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A)) 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, 
FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: {deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286 -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254dev ice_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4 -4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId=' 017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id=' VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab- 4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0,
afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_H ARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj ect;@77951faf, custom={device_fbddd528-7d93-4 9c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fc c254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286 -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254dev ice_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4 -4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId=' 017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id=' VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab- 4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
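As an aside for anyone comparing runs: the duration of a successful migration can be pulled out of the VM_MIGRATION_DONE audit line shown above. A minimal sketch in plain Python (not an oVirt tool; the regex is an assumption based on the message text in this excerpt):

```python
import re

def migration_duration_seconds(log_line):
    """Return (vm_name, duration_seconds) from a VM_MIGRATION_DONE
    audit message, or None if the line does not match."""
    m = re.search(
        r"Migration completed \(VM: (?P<vm>[^,]+), .*Duration: (?P<dur>\d+) seconds",
        log_line,
    )
    if not m:
        return None
    return m.group("vm"), int(m.group("dur"))

# Sample taken from the engine.log excerpt above.
line = ("Message: Migration completed (VM: Oracle_SECONDARY, "
        "Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, "
        "Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))")
print(migration_duration_seconds(line))  # ('Oracle_SECONDARY', 11)
```

Grepping a whole engine.log with this makes it easy to see which VMs completed in seconds and which never produced a completion event at all.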
For the VM with 2 vdisks, we see:
2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}' 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', 
vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz). ... 
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) ignoring it in the refresh until migration is done ... 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) ignoring it in the refresh until migration is done
and so on; these last lines repeated indefinitely for hours until we powered off the VM... Is this a known issue? Any ideas about it?
Thanks
oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
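In case it helps others hitting the same hang: the looping pattern above ("unexpectedly detected as 'MigratingTo'" repeating with no VM_MIGRATION_DONE) can be counted per VM with a few lines of Python. A rough triage sketch, assuming the default engine.log message text as in the excerpts above:

```python
def stuck_migration(log_lines, vm_id):
    """Count, for one VM id, how many times the engine reported it as
    unexpectedly 'MigratingTo' versus how many VM_MIGRATION_DONE events
    it logged. Many loops and zero completions = the hang described above."""
    loops = 0
    done = 0
    for line in log_lines:
        if vm_id not in line:
            continue
        if "unexpectedly detected as 'MigratingTo'" in line:
            loops += 1
        if "VM_MIGRATION_DONE" in line:
            done += 1
    return loops, done

# Shortened sample lines modelled on the Oracle_PRIMARY excerpt above.
sample = [
    "... VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' ...",
    "... VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS ... ignoring it in the refresh ...",
    "... VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' ...",
]
print(stuck_migration(sample, "f7d4ec12-627a-4b83-b59e-886400d55474"))  # (2, 0)
```

Running it over the full engine.log for the 1-vdisk VM versus the 2-vdisk VM makes the difference between the two cases obvious at a glance.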
--
Regards,
*Frank Soyer*
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Hi,
Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger (192.168.0.6) migrated (or failing to migrate...) to victor (192.168.0.5), while the engine.log in the first mail, on 2018-02-12, was for VMs standing on victor, migrated (or failing to migrate...) to ginger. Symptoms were exactly the same in both directions, and the VMs work like a charm before, and even after (migration "killed" by powering off the VMs).
Am I the only one experiencing this problem?
Thanks
--
Regards,
Frank Soyer

On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:

Hi Frank,
Sorry about the delayed response. I've been going through the logs you attached, although I could not find any specific indication of why the migration failed because of the disk you were mentioning.
Does this VM run with both disks on the target host without migration?
Regards,
Maor

On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi Maor,
sorry for the double post, I've changed the email address of my account and supposed that I'd need to re-post it.
And thank you for your time. Here are the logs.
I added a vdisk to an existing VM: it no longer migrates, and I had to power it off after several minutes. Then simply deleting the second disk makes it migrate in exactly 9 s without any problem!

https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d

--
Regards,

Frank Soyer

On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:

Hi Frank,

I already replied to your last email. Can you provide the VDSM logs from the time of the migration failure for both hosts: ginger.local.systea.fr and victor.local.systea.fr?

Thanks,
Maor

On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:

Hi all,

I discovered yesterday a problem when migrating VMs with more than one vdisk.

On our test servers (oVirt 4.1, shared storage with Gluster), I created the 2 VMs needed for a test from a template with a 20G vdisk. On these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks worked well.

Then I saw some updates waiting on the host and tried to put it in maintenance... but it got stuck on the two VMs. They were marked "migrating", but no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time.

I saw that a kvm process for the (big) VMs was launched on the source AND the destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for the VMs: it failed. The only way to stop it was to power off the VMs: the kvm process died on the 2 hosts and the GUI alerted on a failed migration.

In doubt, I tried to delete the second vdisk on one of these VMs: it then migrated without error! And no access problem.
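(An aside for anyone digging through the two gists above: every engine.log line belonging to one migration attempt carries the same correlation ID in square brackets right after the thread name, so the source-host and destination-host excerpts can be cross-matched per migration. A minimal, unofficial Python sketch of that grouping — the line format is inferred from the excerpts quoted in this thread, not from any oVirt API:)

```python
import re
from collections import defaultdict

# Engine log entries look like:
#   2018-02-12 16:46:29,705+01 INFO  [org.ovirt...Command] (default task-28)
#   [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired ...
# The bracketed token after the "(thread)" part is the correlation ID shared
# by every line of one flow (here, one migration attempt).
ENTRY = re.compile(r"\(([^)]*)\)\s+\[([^\]]*)\]")

def group_by_correlation(lines):
    """Group engine.log lines by correlation ID (lines with an empty ID are skipped)."""
    groups = defaultdict(list)
    for line in lines:
        m = ENTRY.search(line)
        if m and m.group(2):
            groups[m.group(2)].append(line)
    return dict(groups)
```

Running this over both engine.log excerpts and comparing the groups side by side makes it easy to see where the 2-vdisk migration stops producing events while the 1-vdisk one runs to completion.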
I tried to extend the first vdisk of the second VM, then delete the second vdisk: it now migrates without any problem!

So after another test with a VM with 2 vdisks, I can say that this is what blocked the migration process :(

In engine.log, for a VM with 1 vdisk migrating well, we see:

2018-02-12 16:46:29,705+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:46:30,261+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381
2018-02-12 16:46:30,277+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381
2018-02-12 16:46:30,285+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz).
2018-02-12 16:46:31,106+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
2018-02-12 16:46:31,147+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
2018-02-12 16:46:31,150+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
2018-02-12 16:46:31,151+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:31,151+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:31,152+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
2018-02-12 16:46:31,152+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done
....
2018-02-12 16:46:41,631+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
2018-02-12 16:46:41,632+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57
2018-02-12 16:46:41,650+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57
2018-02-12 16:46:41,650+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down'
2018-02-12 16:46:41,651+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status 'MigratingTo'
2018-02-12 16:46:42,163+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up'
2018-02-12 16:46:42,169+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
2018-02-12 16:46:42,174+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281
2018-02-12 16:46:42,194+01 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))
2018-02-12 16:46:42,201+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:42,203+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
2018-02-12 16:46:42,254+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298
2018-02-12 16:46:42,257+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:42,257+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:46,260+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c
2018-02-12 16:46:46,267+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:46,268+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}

For the VM with 2 vdisks we see:

2018-02-12 16:49:06,112+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}'
2018-02-12 16:49:06,407+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false.
Entities affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:49:06,712+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0
2018-02-12 16:49:06,713+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c
2018-02-12 16:49:06,724+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c
2018-02-12 16:49:06,732+01 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
2018-02-12 16:49:06,753+01 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz).
...
2018-02-12 16:49:16,453+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
2018-02-12 16:49:16,455+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:16,455+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
...
2018-02-12 16:49:31,484+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:31,484+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done

and so on, the last lines repeating indefinitely for hours until we powered off the VM...

Is this something known? Any idea about it?

Thanks

oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
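(Side note: this repetition is easy to detect mechanically. A rough, unofficial heuristic in Python — it only measures how long the engine keeps printing the "is migrating to VDS ... ignoring it in the refresh" line for the same VM, using the line format shown in the excerpts above; a healthy migration like the 1-vdisk one stops within seconds:)

```python
import re
from datetime import datetime, timedelta

# Matches the engine.log line that repeats forever for the stuck 2-vdisk VM:
#   2018-02-12 16:49:16,455+01 INFO  [...VmAnalyzer] (...) [...] VM 'f7d4ec12-...'
#   is migrating to VDS 'ce3938b1-...'(victor...) ignoring it in the refresh ...
MIGRATING = re.compile(
    r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d+\+\d+ .*"
    r"VM '([0-9a-f-]+)' is migrating to VDS .*ignoring it in the refresh"
)

def stuck_migrations(lines, threshold=timedelta(minutes=10)):
    """Return VM ids whose 'migrating' lines keep appearing longer than threshold."""
    first_seen, last_seen = {}, {}
    for line in lines:
        m = MIGRATING.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        vm_id = m.group(2)
        first_seen.setdefault(vm_id, ts)   # timestamp of the first occurrence
        last_seen[vm_id] = ts              # timestamp of the latest occurrence
    return [vm for vm, t0 in first_seen.items() if last_seen[vm] - t0 >= threshold]
```

The 10-minute threshold is an arbitrary choice for illustration; anything well above a normal migration time for the environment would do.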
--
Regards,

Frank Soyer
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
/>4df1-435c-af02-565039fcc254dev<wbr />ice=5F8945f61a-abbe-4= 156-8485-a4<wbr />aa6f1908dbdevice=5F017b5e59-01c4<wbr />-4aac-bf0c-b5d= 9557284d6=3DVmDevi<wbr />ce:{id=3D'VmDeviceId:{deviceId=3D'<wbr />017b5= e59-01c4-4aac-bf0c-<wbr />b5d9557284d6', vmId=3D'3f57e669-5e4c-4d10-85c= c-<wbr />d573004a099d'}', device=3D'tablet', type=3D'UNKNOWN', bootOrde= r=3D'0', specParams=3D'[]', address=3D'{bus=3D0, type=3Dusb, port=3D1}'= , managed=3D'false', plugged=3D'true', readOnly=3D'false', deviceAlias=3D= 'input0', customProperties=3D'[]', snapshotId=3D'null', logicalName=3D'= null', hostDevice=3D'null'}, device=5Ffbddd528-7d93-49c6-a286<wbr />-18= 0e021cb274=3DVmDevice:{id=3D'<wbr />VmDeviceId:{deviceId=3D'fbddd528<wb= r />-7d93-49c6-a286-180e021cb274', vmId=3D'3f57e669-5e4c-4d10-85cc-<wbr= />d573004a099d'}', device=3D'ide', type=3D'CONTROLLER', bootOrder=3D'0= ', specParams=3D'[]', address=3D'{slot=3D0x01, bus=3D0x00, domain=3D0x0= 000, type=3Dpci, function=3D0x1}', managed=3D'false', plugged=3D'true',= readOnly=3D'false', deviceAlias=3D'ide', customProperties=3D'[]', snap= shotId=3D'null', logicalName=3D'null', hostDevice=3D'null'}, device=5Ff= bddd528-7d93-49c6-a286<wbr />-180e021cb274device=5F879c93ab-<wbr />4df1= -435c-af02-565039fcc254dev<wbr />ice=5F8945f61a-abbe-4156-8485-<wbr />a= 4aa6f1908db=3DVmDevice:{id=3D'<wbr />VmDeviceId:{deviceId=3D'<wbr />894= 5f61a-abbe-4156-8485-<wbr />a4aa6f1908db', vmId=3D'3f57e669-5e4c-4d10-8= 5cc-<wbr />d573004a099d'}', device=3D'unix', type=3D'CHANNEL', bootOrde= r=3D'0', specParams=3D'[]', address=3D'{bus=3D0, controller=3D0, type=3D= virtio-serial, port=3D2}', managed=3D'false', plugged=3D'true', readOnl= y=3D'false', deviceAlias=3D'channel1', customProperties=3D'[]', snapsho= tId=3D'null', logicalName=3D'null', hostDevice=3D'null'}}, vmType=3Dkvm= , memSize=3D8192, smpCoresPerSocket=3D1, vmName=3DOracle=5FSECONDARY, n= ice=3D0, status=3DUp, maxMemSize=3D32768, bootMenuEnable=3Dfalse, vmId=3D= 3f57e669-5e4c-4d10-85cc-d<wbr 
/>573004a099d, numOfIoThreads=3D2, smpThr= eadsPerCore=3D1, smartcardEnable=3Dfalse, maxMemSlots=3D16, kvmEnable=3D= true, pitReinjection=3Dfalse, displayNetwork=3Dovirtmgmt, devices=3D[Lj= ava.lang.Object;@2e<wbr />4d3dd3, memGuaranteedSize=3D8192, maxVCpus=3D= 16, clientIp=3D, statusTime=3D<a value=3D"+14304259600" target=3D"=5Fbl= ank" href=3D"tel:(430)%20425-9600">4304259600</a>, display=3Dvnc}], log= id: 7cc65298<br />2018-02-12 16:46:42,257+01 INFO [org.ovirt.eng= ine.core.vdsbro<wbr />ker.monitoring.VmDevicesMonito<wbr />ring] (ForkJ= oinPool-1-worker-4) [] Received a vnc Device without an address when pr= ocessing VM 3f57e669-5e4c-4d10-85cc-d57300<wbr />4a099d devices, skippi= ng device: {device=3Dvnc, specParams=3D{displayNetwork=3Dovi<wbr />rtmg= mt, keyMap=3Dfr, displayIp=3D192.168.0.5}, type=3Dgraphics, deviceId=3D= 813957b1-446a-4e88-9e<wbr />40-9fe76d2c442d, port=3D5901}<br />2018-02-= 12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro<wbr />ker.m= onitoring.VmDevicesMonito<wbr />ring] (ForkJoinPool-1-worker-4) [] Rece= ived a lease Device without an address when processing VM 3f57e669-5e4c= -4d10-85cc-d57300<wbr />4a099d devices, skipping device: {lease=5Fid=3D= 3f57e669-5e4c-4d10-8<wbr />5cc-d573004a099d, sd=5Fid=3D1e51cecc-eb2e-47= d0-b185-<wbr />920fdc7afa16, deviceId=3D{uuid=3Da09949aa-5642-4<wbr />b= 6d-94a4-8b0d04257be5}, offset=3D6291456, device=3Dlease, path=3D/rhev/d= ata-center/mnt/glu<wbr />sterSD/192.168.0.6:=5FDATA01/1e5<wbr />1cecc-e= b2e-47d0-b185-920fdc7af<wbr />a16/dom=5Fmd/xleases, type=3Dlease}<br />= 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbro<wbr= />ker.vdsbroker.FullListVDSComma<wbr />nd] (DefaultQuartzScheduler5) [= 7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=3Dtrue, emul= atedMachine=3Dpc-i440fx-rhel<wbr />7.3.0, afterMigrationStatus=3D, tabl= etEnable=3Dtrue, pid=3D18748, guestDiskMapping=3D{0QEMU=5FQEMU=5FH<wbr = />ARDDISK=5Fd890fa68-fba4-4f49-9=3D{<wbr />name=3D/dev/sda}, QEMU=5FDVD= 
-ROM=5FQM00003=3D{name=3D/de<wbr />v/sr0}}, transparentHugePages=3Dtrue= , timeOffset=3D0, cpuType=3DNehalem, smp=3D2, guestNumaNodes=3D[Ljava.l= ang.Obj<wbr />ect;@77951faf, custom=3D{device=5Ffbddd528-7d93-4<wbr />9= c6-a286-180e021cb274device=5F87<wbr />9c93ab-4df1-435c-af02-565039fc<wb= r />c254=3DVmDevice:{id=3D'VmDeviceId:<wbr />{deviceId=3D'879c93ab-4df1= -435c-<wbr />af02-565039fcc254', vmId=3D'3f57e669-5e4c-4d10-85cc-<wbr /= pecParams=3D'[]', address=3D'{bus=3D0, controller=3D0, type=3Dvirtio-se= rial, port=3D1}', managed=3D'false', plugged=3D'true', readOnly=3D'fals= e', deviceAlias=3D'channel0', customProperties=3D'[]', snapshotId=3D'nu= ll', logicalName=3D'null', hostDevice=3D'null'}, device=5Ffbddd528-7d93= -49c6-a286<wbr />-180e021cb274device=5F879c93ab-<wbr />4df1-435c-af02-5= 65039fcc254dev<wbr />ice=5F8945f61a-abbe-4156-8485-a4<wbr />aa6f1908dbd= evice=5F017b5e59-01c4<wbr />-4aac-bf0c-b5d9557284d6=3DVmDevi<wbr />ce:{= id=3D'VmDeviceId:{deviceId=3D'<wbr />017b5e59-01c4-4aac-bf0c-<wbr />b5d= 9557284d6', vmId=3D'3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d'}', dev= ice=3D'tablet', type=3D'UNKNOWN', bootOrder=3D'0', specParams=3D'[]', a= ddress=3D'{bus=3D0, type=3Dusb, port=3D1}', managed=3D'false', plugged=3D= 'true', readOnly=3D'false', deviceAlias=3D'input0', customProperties=3D= '[]', snapshotId=3D'null', logicalName=3D'null', hostDevice=3D'null'}, = device=5Ffbddd528-7d93-49c6-a286<wbr />-180e021cb274=3DVmDevice:{id=3D'= <wbr />VmDeviceId:{deviceId=3D'fbddd528<wbr />-7d93-49c6-a286-180e021cb= 274', vmId=3D'3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d'}', device=3D= 'ide', type=3D'CONTROLLER', bootOrder=3D'0', specParams=3D'[]', address= =3D'{slot=3D0x01, bus=3D0x00, domain=3D0x0000, type=3Dpci, function=3D0= x1}', managed=3D'false', plugged=3D'true', readOnly=3D'false', deviceAl= ias=3D'ide', customProperties=3D'[]', snapshotId=3D'null', logicalName=3D= 'null', hostDevice=3D'null'}, device=5Ffbddd528-7d93-49c6-a286<wbr />-1= 
80e021cb274device=5F879c93ab-<wbr />4df1-435c-af02-565039fcc254dev<wbr = />ice=5F8945f61a-abbe-4156-8485-<wbr />a4aa6f1908db=3DVmDevice:{id=3D'<= wbr />VmDeviceId:{deviceId=3D'<wbr />8945f61a-abbe-4156-8485-<wbr />a4a= a6f1908db', vmId=3D'3f57e669-5e4c-4d10-85cc-<wbr />d573004a099d'}', dev= ice=3D'unix', type=3D'CHANNEL', bootOrder=3D'0', specParams=3D'[]', add= ress=3D'{bus=3D0, controller=3D0, type=3Dvirtio-serial, port=3D2}', man= aged=3D'false', plugged=3D'true', readOnly=3D'false', deviceAlias=3D'ch= annel1', customProperties=3D'[]', snapshotId=3D'null', logicalName=3D'n= ull', hostDevice=3D'null'}}, vmType=3Dkvm, memSize=3D8192, smpCoresPerS= ocket=3D1, vmName=3DOracle=5FSECONDARY, nice=3D0, status=3DUp, maxMemSi= ze=3D32768, bootMenuEnable=3Dfalse, vmId=3D3f57e669-5e4c-4d10-85cc-d<wb= r />573004a099d, numOfIoThreads=3D2, smpThreadsPerCore=3D1, smartcardEn= able=3Dfalse, maxMemSlots=3D16, kvmEnable=3Dtrue, pitReinjection=3Dfals= e, displayNetwork=3Dovirtmgmt, devices=3D[Ljava.lang.Object;@28<wbr />6= 410fd, memGuaranteedSize=3D8192, maxVCpus=3D16, clientIp=3D, statusTime= =3D<a value=3D"+14304263620" target=3D"=5Fblank" href=3D"tel:(430)%2042= 6-3620">4304263620</a>, display=3Dvnc}], log id: 58cdef4c<br />2018-02-= 12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbro<wbr />ker.m= onitoring.VmDevicesMonito<wbr />ring] (DefaultQuartzScheduler5) [7fcb20= 0a] Received a vnc Device without an address when processing VM 3f57e66= 9-5e4c-4d10-85cc-d57300<wbr />4a099d devices, skipping device: {device=3D= vnc, specParams=3D{displayNetwork=3Dovi<wbr />rtmgmt, keyMap=3Dfr, disp= layIp=3D192.168.0.5}, type=3Dgraphics, deviceId=3D813957b1-446a-4e88-9e= <wbr />40-9fe76d2c442d, port=3D5901}<br />2018-02-12 16:46:46,268+01 IN= FO [org.ovirt.engine.core.vdsbro<wbr />ker.monitoring.VmDevicesMo= nito<wbr />ring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease = Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d5= 7300<wbr />4a099d devices, skipping 
device: {lease=5Fid=3D3f57e669-5e4c= -4d10-8<wbr />5cc-d573004a099d, sd=5Fid=3D1e51cecc-eb2e-47d0-b185-<wbr = />920fdc7afa16, deviceId=3D{uuid=3Da09949aa-5642-4<wbr />b6d-94a4-8b0d0= 4257be5}, offset=3D6291456, device=3Dlease, path=3D/rhev/data-center/mn= t/glu<wbr />sterSD/192.168.0.6:=5FDATA01/1e5<wbr />1cecc-eb2e-47d0-b185= -920fdc7af<wbr />a16/dom=5Fmd/xleases, type=3Dlease}<p> </p></bloc= kquote><br />For the VM with 2 vdisks we see :<blockquote><p>2018-02-12= 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.Mi<wbr />grateVm= ToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838<wbr /=
dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks=3D'[<wbr = />f7d4ec12-627a-4b83-b59e-886400<wbr />d55474=3DVM]', sharedLocks=3D''}= '<br />2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll= .Mi<wbr />grateVmToServerCommand] (org.ovirt.thread.pool-6-threa<wbr />= d-49) [92b5af33-cb87-4142-b8fe-8b838<wbr />dd7458e] Running command: Mi= grateVmToServerCommand internal: false. Entities affected : ID: f= 7d4ec12-627a-4b83-b59e-886400<wbr />d55474 Type: VMAction group MIGRATE= =5FVM with role type USER<br />2018-02-12 16:49:06,712+01 INFO [o= rg.ovirt.engine.core.vdsbro<wbr />ker.MigrateVDSCommand] (org.ovirt.thr= ead.pool-6-threa<wbr />d-49) [92b5af33-cb87-4142-b8fe-8b838<wbr />dd745= 8e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{r<wbr />unAs= ync=3D'true', hostId=3D'd569c2dd-8f30-4878-8ae<wbr />a-858db285cf69', v= mId=3D'f7d4ec12-627a-4b83-b59e-<wbr />886400d55474', srcHost=3D'192.168= .0.5', dstVdsId=3D'ce3938b1-b23f-4d22-8<wbr />40a-f17d7cd87bb1', dstHos= t=3D'<a target=3D"=5Fblank" href=3D"http://192.168.0.6:54321">192.168.0= .6:54321</a>', migrationMethod=3D'ONLINE', tunnelMigration=3D'false', m= igrationDowntime=3D'0', autoConverge=3D'true', migrateCompressed=3D'fal= se', consoleAddress=3D'null', maxBandwidth=3D'500', enableGuestEvents=3D= 'true', maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2', conve= rgenceSchedule=3D'[init=3D[{n<wbr />ame=3DsetDowntime, params=3D[100]}]= , stalling=3D[{limit=3D1, action=3D{name=3DsetDowntime, params=3D[150]}= }, {limit=3D2, action=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D= 3, action=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4, action=3D= {name=3DsetDowntime, params=3D[400]}}, {limit=3D6, action=3D{name=3Dset= Downtime, params=3D[500]}}, {limit=3D-1, action=3D{name=3Dabort, params= =3D[]}}]]'}), log id: 3702a9e0<br />2018-02-12 16:49:06,713+01 INFO &nb= sp;[org.ovirt.engine.core.vdsbro<wbr />ker.vdsbroker.MigrateBrokerVDS<w= br />Command] 
(org.ovirt.thread.pool-6-threa<wbr />d-49) [92b5af33-cb87= -4142-b8fe-8b838<wbr />dd7458e] START, MigrateBrokerVDSCommand(HostNa<w= br />me =3D <a target=3D"=5Fblank" href=3D"http://ginger.local.systea.f= r">ginger.local.systea.fr</a>, MigrateVDSCommandParameters:{r<wbr />unA= sync=3D'true', hostId=3D'd569c2dd-8f30-4878-8ae<wbr />a-858db285cf69', = vmId=3D'f7d4ec12-627a-4b83-b59e-<wbr />886400d55474', srcHost=3D'192.16= 8.0.5', dstVdsId=3D'ce3938b1-b23f-4d22-8<wbr />40a-f17d7cd87bb1', dstHo= st=3D'<a target=3D"=5Fblank" href=3D"http://192.168.0.6:54321">192.168.= 0.6:54321</a>', migrationMethod=3D'ONLINE', tunnelMigration=3D'false', = migrationDowntime=3D'0', autoConverge=3D'true', migrateCompressed=3D'fa= lse', consoleAddress=3D'null', maxBandwidth=3D'500', enableGuestEvents=3D= 'true', maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2', conve= rgenceSchedule=3D'[init=3D[{n<wbr />ame=3DsetDowntime, params=3D[100]}]= , stalling=3D[{limit=3D1, action=3D{name=3DsetDowntime, params=3D[150]}= }, {limit=3D2, action=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D= 3, action=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4, action=3D= {name=3DsetDowntime, params=3D[400]}}, {limit=3D6, action=3D{name=3Dset= Downtime, params=3D[500]}}, {limit=3D-1, action=3D{name=3Dabort, params= =3D[]}}]]'}), log id: 1840069c<br />2018-02-12 16:49:06,724+01 INFO &nb= sp;[org.ovirt.engine.core.vdsbro<wbr />ker.vdsbroker.MigrateBrokerVDS<w= br />Command] (org.ovirt.thread.pool-6-threa<wbr />d-49) [92b5af33-cb87= -4142-b8fe-8b838<wbr />dd7458e] FINISH, MigrateBrokerVDSCommand, log id= : 1840069c<br />2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine= .core.vdsbro<wbr />ker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thre= a<wbr />d-49) [92b5af33-cb87-4142-b8fe-8b838<wbr />dd7458e] FINISH, Mig= rateVDSCommand, return: MigratingFrom, log id: 3702a9e0<br />2018-02-12= 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db<wbr />broker.= auditloghandling.AuditL<wbr />ogDirector] 
(org.ovirt.thread.pool-6-thre= a<wbr />d-49) [92b5af33-cb87-4142-b8fe-8b838<wbr />dd7458e] EVENT=5FID:= VM=5FMIGRATION=5FSTART(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b= 838d<wbr />d7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c<wbr />383061,= Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migra= tion started (VM: Oracle=5FPRIMARY, Source: <a target=3D"=5Fblank" href= =3D"http://ginger.local.systea.fr">ginger.local.systea.fr</a>, Destinat= ion: <a target=3D"=5Fblank" href=3D"http://victor.local.systea.fr">vict= or.local.systea.fr</a>, User: admin@internal-authz).<br />...<br />2018= -02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbro<wbr />k= er.monitoring.VmsStatisticsFe<wbr />tcher] (DefaultQuartzScheduler4) [1= 62a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7<wbr />cd= 87bb1'<br />2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.cor= e.vdsbro<wbr />ker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [16= 2a5bc3] VM 'f7d4ec12-627a-4b83-b59e-88640<wbr />0d55474'(Oracle=5FPRIMA= RY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d= 22-840a-f17d7<wbr />cd87bb1'(<a target=3D"=5Fblank" href=3D"http://vict= or.local.systea.fr">victor.local.systea.<wbr />fr</a>) (expected on 'd5= 69c2dd-8f30-4878-8aea-858db<wbr />285cf69')<br />2018-02-12 16:49:16,45= 5+01 INFO [org.ovirt.engine.core.vdsbro<wbr />ker.monitoring.VmAn= alyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59= e-88640<wbr />0d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17= d7<wbr />cd87bb1'(<a target=3D"=5Fblank" href=3D"http://victor.local.sy= stea.fr">victor.local.systea.<wbr />fr</a>) ignoring it in the refresh = until migration is done<br />...<br />2018-02-12 16:49:31,484+01 INFO &= nbsp;[org.ovirt.engine.core.vdsbro<wbr />ker.monitoring.VmAnalyzer] (De= faultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-88640<wbr= />0d55474'(Oracle=5FPRIMARY) was unexpectedly detected as 'MigratingTo= ' on VDS 
'ce3938b1-b23f-4d22-840a-f17d7<wbr />cd87bb1'(<a target=3D"=5F= blank" href=3D"http://victor.local.systea.fr">victor.local.systea.<wbr = />fr</a>) (expected on 'd569c2dd-8f30-4878-8aea-858db<wbr />285cf69')<b= r />2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbro= <wbr />ker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] = VM 'f7d4ec12-627a-4b83-b59e-88640<wbr />0d55474' is migrating to VDS 'c= e3938b1-b23f-4d22-840a-f17d7<wbr />cd87bb1'(<a target=3D"=5Fblank" href= =3D"http://victor.local.systea.fr">victor.local.systea.<wbr />fr</a>) i= gnoring it in the refresh until migration is done<br /> </p></bloc= kquote><br />and so on, last lines repeated indefinitly for hours since= we poweroff the VM...<br />Is this something known ? Any idea about th= at ?<br /><br />Thanks<br /><br />Ovirt 4.1.6, updated last at feb-13. = Gluster 3.12.1.<br /><br />--<p class=3D"m=5F-4299273321983674487m=5F85= 87729722327689770Text1">Cordialement,<br /><br /><b>Frank Soyer </b></p= <br />=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F= =5F=5F=5F=5F=5F=5F=5F=5F<wbr />=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F= =5F=5F=5F<br />Users mailing list<br /><a target=3D"=5Fblank" href=3D"m= ailto:Users@ovirt.org">Users@ovirt.org</a><br /><a rel=3D"noreferrer" t= arget=3D"=5Fblank" href=3D"http://lists.ovirt.org/mailman/listinfo/user= s">http://lists.ovirt.org/mailman<wbr />/listinfo/users</a><br /> = </blockquote></div></div></blockquote><br /> </div></div></blockqu= ote></div></div></blockquote><br /> </html>
------=_=-_OpenGroupware_org_NGMime-18019-1519309377.654147-22--------

I encountered a bug (see [1]) which contains the same error mentioned in your VDSM logs (see [2]), but I doubt it is related. Milan, maybe you have some advice on troubleshooting the issue? Will the libvirt/qemu logs help? I would suggest opening a bug on that issue so we can track it more properly.

Regards,
Maor

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to VM running on 2 Hosts

[2] 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
    io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
  File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
    'current_values': v.getIoTune()}
  File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
    result = self.getIoTuneResponse()
  File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
    res = self._dom.blockIoTune(
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
    % self.vmid)
NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not started yet or was shut down

On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fsoyer@systea.fr> wrote:
Hi, Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger (192.168.0.6) migrated (or failed to migrate...) to victor (192.168.0.5), while the engine.log in the first mail on 2018-02-12 was for VMs standing on victor, migrated (or failed to migrate...) to ginger. Symptoms were exactly the same, in both directions, and the VMs worked like a charm before, and even after (migration "killed" by a poweroff of the VMs). Am I the only one experiencing this problem ?
Thanks --
Cordialement,
*Frank Soyer *
On Thursday, February 22, 2018 at 00:45 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
Hi Frank,
Sorry about the delayed response. I've been going through the logs you attached, although I could not find any specific indication of why the migration failed because of the disk you mentioned. Does this VM run with both disks on the target host without migration?
Regards, Maor
On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi Maor, sorry for the double post, I changed the email address on my account and supposed I'd need to re-post it. And thank you for your time. Here are the logs. I added a vdisk to an existing VM : it no longer migrates, and I need to power it off after several minutes. Then simply deleting the second disk makes it migrate in exactly 9s without problem ! https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561 https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
--
Cordialement,
*Frank Soyer *
On Wednesday, February 14, 2018 at 11:04 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
Hi Frank,
I already replied to your last email. Can you provide the VDSM logs from the time of the migration failure for both hosts: ginger.local.systea.fr and victor.local.systea.fr
Thanks, Maor
On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi all,
I discovered yesterday a problem when migrating VMs with more than one vdisk.
On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs needed for a test, from a template with a 20G vdisk. On these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks worked well.
Now I saw some updates waiting on the host. I tried to put it in maintenance... but it stopped on the two VMs. They were marked "migrating", but no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time.
I saw that a kvm process for the (big) VMs was launched on the source AND destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for the VMs : failed. The only way to stop it was to power off the VMs : the kvm process died on the 2 hosts and the GUI alerted on a failed migration.
In doubt, I tried to delete the second vdisk on one of these VMs : it then migrated without error ! And no access problem.
I tried to extend the first vdisk of the second VM, then delete the second vdisk : it now migrates without problem !
So after another test with a VM with 2 vdisks, I can say that this blocked the migration process :(
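When comparing the engine.log excerpts that follow, it helps to group lines by the bracketed correlation ID so one migration's events can be read end to end. This is just a reading aid in plain Python, not oVirt tooling; the sample lines are shortened from the excerpts in this thread:

```python
import re

# Shortened engine.log lines from this thread; the bracketed UUID after the
# thread name is the engine's correlation ID for one operation.
log_lines = [
    "2018-02-12 16:46:29,705+01 INFO [MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired",
    "2018-02-12 16:49:06,112+01 INFO [MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired",
    "2018-02-12 16:46:30,285+01 INFO [MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom",
]

# Matches a full 8-4-4-4-12 hex UUID in square brackets.
CORRELATION_ID = re.compile(
    r"\[([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})\]")

def group_by_correlation_id(lines):
    """Group engine.log lines by correlation ID; lines without a full
    UUID (e.g. short context IDs like [27fac647]) are skipped."""
    groups = {}
    for line in lines:
        match = CORRELATION_ID.search(line)
        if match:
            groups.setdefault(match.group(1), []).append(line)
    return groups

for cid, lines in group_by_correlation_id(log_lines).items():
    print(cid, "->", len(lines), "lines")
```

Running this over a full engine.log makes it easy to see that the 1-vdisk migration (correlation ID 2f712024-...) runs to completion while the 2-vdisk one (92b5af33-...) never does.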
In engine.log, for a VMs with 1 vdisk migrating well, we see :
2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381
2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381
2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz).
2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'( ginger.local.systea.fr) ignoring it in the refresh until migration is done .... 
2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down' 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status 'MigratingTo' 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up' 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A)) 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, 
FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: {deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286 -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, 
afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_H ARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj ect;@77951faf, custom={device_fbddd528-7d93-4 9c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fc c254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286 -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
For the VM with 2 vdisks, we see:
2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}' 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', 
vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz). ... 
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) ignoring it in the refresh until migration is done ... 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) ignoring it in the refresh until migration is done
and so on; the last lines repeated indefinitely for hours, until we powered off the VM... Is this a known issue? Any idea about it?
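As an aside, the convergenceSchedule shown in the MigrateVDSCommand lines above is oVirt's auto-convergence policy: after each round in which the migration stalls, the next action raises the permitted downtime, and the limit=-1 entry aborts the migration. A rough, purely illustrative interpreter of that structure (this is not vdsm's actual code, just the schedule from the log turned into Python):

```python
# The schedule copied from the MigrateVDSCommand log line above.
schedule = {
    "init": [{"name": "setDowntime", "params": [100]}],
    "stalling": [
        {"limit": 1, "action": {"name": "setDowntime", "params": [150]}},
        {"limit": 2, "action": {"name": "setDowntime", "params": [200]}},
        {"limit": 3, "action": {"name": "setDowntime", "params": [300]}},
        {"limit": 4, "action": {"name": "setDowntime", "params": [400]}},
        {"limit": 6, "action": {"name": "setDowntime", "params": [500]}},
        {"limit": -1, "action": {"name": "abort", "params": []}},
    ],
}

def action_for_stall_count(schedule, stalls):
    """Return the action taken after `stalls` consecutive stalling rounds.

    limit=-1 is the catch-all: once every numeric limit is exceeded,
    the migration is aborted.
    """
    for entry in schedule["stalling"]:
        if entry["limit"] == -1 or stalls <= entry["limit"]:
            return entry["action"]
    return {"name": "abort", "params": []}

assert action_for_stall_count(schedule, 1) == {"name": "setDowntime", "params": [150]}
assert action_for_stall_count(schedule, 7)["name"] == "abort"
```

So a migration that never converges walks the downtime up to 500 ms and is eventually aborted, which matches the engine never reporting progress for the 2-vdisk VMs here.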
Thanks
oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
--
Cordialement,
*Frank Soyer *
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Maor Lipchuk <mlipchuk@redhat.com> writes:
I encountered a bug (see [1]) which contains the same error mentioned in your VDSM logs (see [2]), but I doubt it is related.
Indeed, it's not related. The error in vdsm_victor.log just means that the info gathering call tries to access libvirt domain before the incoming migration is completed. It's ugly but harmless.
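For readers wondering why that error is harmless: vdsm wraps the libvirt domain in a proxy that raises NotConnectedError until the underlying domain exists, and the stats-gathering callers tolerate that exception. A minimal illustrative sketch of the pattern (class and helper names here are hypothetical, not vdsm's actual API):

```python
# Sketch of the guard behind the NotConnectedError seen in vdsm_victor.log.
# Names below (DomainWrapper, get_io_tune, FakeDom) are invented for
# illustration; only the NotConnectedError message mirrors the real log.

class NotConnectedError(Exception):
    pass

class DomainWrapper(object):
    def __init__(self, vmid):
        self.vmid = vmid
        self._dom = None          # set only once the (incoming) migration finishes

    def connect(self, dom):
        self._dom = dom

    def __getattr__(self, name):
        # Called only for attributes not found normally, i.e. domain methods.
        if self._dom is None:
            raise NotConnectedError(
                "VM %r was not started yet or was shut down" % self.vmid)
        return getattr(self._dom, name)

def get_io_tune(wrapper):
    """Stats gathering: tolerate VMs whose migration is still in flight."""
    try:
        return wrapper.blockIoTune()
    except NotConnectedError:
        return None              # ugly in the log, harmless in practice

class FakeDom(object):
    def blockIoTune(self):
        return {"total_bytes_sec": 0}

w = DomainWrapper("755cf168")
assert get_io_tune(w) is None                       # migration not finished yet
w.connect(FakeDom())
assert get_io_tune(w) == {"total_bytes_sec": 0}     # domain available
```

The traceback in [2] is just the unguarded variant of this: getAllVmIoTunePolicies hit the proxy before the incoming migration completed.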
Milan, do you have any advice for troubleshooting the issue? Would the libvirt/qemu logs help?
It seems there is something wrong on (at least) the source host. There are no migration progress messages in the vdsm_ginger.log and there are warnings about stale stat samples. That looks like problems with calling libvirt – slow and/or stuck calls, maybe due to storage problems. The possibly faulty second disk could cause that. libvirt debug logs could tell us whether that is indeed the problem and whether it is caused by storage or something else.
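For anyone following along, libvirt debug logging can typically be enabled on the hosts via /etc/libvirt/libvirtd.conf, then restarting libvirtd. The filter list below is only a suggestion (adjust categories and verbosity as needed); it is not an oVirt-specific recommendation:

```ini
# /etc/libvirt/libvirtd.conf  (on both source and destination hosts)
# 1 = debug; restart libvirtd after changing these.
log_filters="1:qemu 1:libvirt 3:event 3:json 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```

Debug logs grow quickly, so it is best to revert these settings once the failing migration has been captured.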
I would suggest opening a bug on that issue so we can track it properly.
Regards, Maor
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to VM running on 2 Hosts
[2]
2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
    io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
  File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
    'current_values': v.getIoTune()}
  File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
    result = self.getIoTuneResponse()
  File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
    res = self._dom.blockIoTune(
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
    % self.vmid)
NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not started yet or was shut down
On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fsoyer@systea.fr> wrote:
Hi, Yes: in the vdsm logs of 2018-02-16 I tried with a VM standing on ginger (192.168.0.5), migrated (or rather failing to migrate...) to victor (192.168.0.6), while the engine.log in the first mail, from 2018-02-12, was for VMs standing on victor, migrated (or failing to migrate...) to ginger. The symptoms were exactly the same in both directions, and the VMs worked like a charm before, and even after (with the migration "killed" by powering off the VMs). Am I the only one experiencing this problem?
Thanks --
Cordialement,
*Frank Soyer *
On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
Hi Frank,
Sorry about the delayed response. I've been going through the logs you attached, but I could not find any specific indication of why the migration failed because of the disk you mentioned. Does this VM run with both disks on the target host without migration?
Regards, Maor
On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi Maor, sorry for the double post; I changed the email address on my account and assumed I needed to re-post it. And thank you for your time. Here are the logs. I added a vdisk to an existing VM: it no longer migrates, and I had to power it off after some minutes. Then simply deleting the second disk makes it migrate in exactly 9 s without problem! https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561 https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
--
Cordialement,
*Frank Soyer *
On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
Hi Frank,
I already replied on your last email. Can you provide the VDSM logs from the time of the migration failure for both hosts: ginger.local.systea.fr and victor.local.systea.fr
Thanks, Maor
On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi all,

I discovered yesterday a problem when migrating VMs with more than one vdisk.

On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs needed for a test from a template with a 20G vdisk. On these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks worked well.

Then I saw some updates waiting on the host and tried to put it in maintenance... but it got stuck on the two VMs. They were marked "migrating", but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time.

I saw that a kvm process for the (big) VMs was launched on the source AND the destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for the VMs: it failed. The only way to stop it was to power off the VMs: the kvm process died on the 2 hosts and the GUI reported a failed migration.

In doubt, I tried to delete the second vdisk on one of these VMs: it then migrated without error! And no access problem. I tried to extend the first vdisk of the second VM, then delete the second vdisk: it now migrates without problem!
So after another test with a VM with 2 vdisks, I can say that the second vdisk is what blocked the migration process :(
In engine.log, for a VM with 1 vdisk migrating well, we see:
2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.db broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz). 
2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: {deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286 -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm 
DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'( ginger.local.systea.fr) ignoring it in the refresh until migration is done .... 
2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down' 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status 'MigratingTo' 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up' 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A)) 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, 
FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: {deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286 -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0,
afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
For the VM with 2 vdisks, we see:
2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}' 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', 
vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz). ... 
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) ignoring it in the refresh until migration is done ... 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) ignoring it in the refresh until migration is done
and so on; the last lines repeated indefinitely for hours until we powered off the VM... Is this something known? Any idea about that?
Thanks
oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
--
Cordialement,
*Frank Soyer *
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi,
I don't believe that this is related to a host: tests have been done from victor (source) to ginger (dest) and from ginger to victor. I don't see problems on storage (Gluster 3.12, native, managed by oVirt), as VMs with a single disk from 20 to 250G migrate without error in a few seconds and with no downtime. How can I enable this libvirt debug mode?
--
Cordialement,
Frank Soyer

On Friday, February 23, 2018 09:56 CET, Milan Zamazal <mzamazal@redhat.com> wrote:

Maor Lipchuk <mlipchuk@redhat.com> writes:
I encountered a bug (see [1]) which contains the same error mentioned in your VDSM logs (see [2]), but I doubt it is related.
Indeed, it's not related. The error in vdsm_victor.log just means that the info gathering call tries to access the libvirt domain before the incoming migration is completed. It's ugly but harmless.
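As a rough illustration of why that error is benign: VDSM keeps the libvirt domain behind a wrapper that raises NotConnectedError until the domain actually exists on the host. The class and names below are a simplified sketch of that guard pattern, not VDSM's exact code:

```python
# Sketch (illustrative, not VDSM's actual implementation): before an incoming
# migration has created the libvirt domain, any attribute access on the
# wrapper raises NotConnectedError instead of touching libvirt.
class NotConnectedError(Exception):
    pass

class DisconnectedDomain(object):
    """Placeholder object used while the libvirt domain does not exist yet."""
    def __init__(self, vmid):
        self.vmid = vmid

    def __getattr__(self, name):
        # Any libvirt call (e.g. blockIoTune) lands here and fails cleanly.
        raise NotConnectedError(
            "VM %r was not started yet or was shut down" % self.vmid)

dom = DisconnectedDomain('755cf168-de65-42ed-b22f-efe9136f7594')
try:
    dom.blockIoTune()  # what getIoTuneResponse() effectively calls in [2]
except NotConnectedError as e:
    print(e)           # the "harmless" error seen in vdsm_victor.log
```

So the stats-gathering thread simply raced with the incoming migration; once the domain exists, the same call succeeds.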
Milan, maybe you have some advice on troubleshooting the issue? Would the libvirt/qemu logs help?
It seems there is something wrong on (at least) the source host. There are no migration progress messages in vdsm_ginger.log, and there are warnings about stale stat samples. That looks like problems with calling libvirt – slow and/or stuck calls, maybe due to storage problems. The possibly faulty second disk could cause that.
libvirt debug logs could tell us whether that is indeed the problem and whether it is caused by storage or something else.
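For reference, libvirt debug logging is normally enabled on the host in /etc/libvirt/libvirtd.conf, followed by a libvirtd restart (disruptive, so put the host in maintenance first). The filter values below are a commonly suggested starting point, not an oVirt-mandated set:

```
# /etc/libvirt/libvirtd.conf
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
log_filters="1:qemu 1:libvirt 3:object 3:json 3:event 3:util"
```

Level 1 is the most verbose; the 3:* entries silence noisy subsystems so the qemu/libvirt migration calls stay readable.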
I would suggest opening a bug on that issue so we can track it more properly.
Regards, Maor
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to VM running on 2 Hosts
[2]
2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
    io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
  File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
    'current_values': v.getIoTune()}
  File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
    result = self.getIoTuneResponse()
  File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
    res = self._dom.blockIoTune(
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
    % self.vmid)
NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not started yet or was shut down
On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fsoyer@systea.fr> wrote:
Hi,
Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger (192.168.0.6) migrated (or failing to migrate...) to victor (192.168.0.5), while the engine.log in the first mail on 2018-02-12 was for VMs standing on victor, migrated (or failing to migrate...) to ginger. Symptoms were exactly the same in both directions, and the VMs work like a charm before, and even after (migration "killed" by a poweroff of the VMs).
Am I the only one experiencing this problem?
Thanks --
Cordialement,
*Frank Soyer *
On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
Hi Frank,
Sorry about the delayed response.
I've been going through the logs you attached, although I could not find any specific indication of why the migration failed because of the disk you were mentioning.
Does this VM run with both disks on the target host without migration?
Regards, Maor
On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi Maor,
sorry for the double post, I've changed the email address of my account and supposed that I'd need to re-post it. And thank you for your time.
Here are the logs. I added a vdisk to an existing VM: it no longer migrates, needing a poweroff after minutes. Then simply deleting the second disk makes it migrate in exactly 9s without problem!
https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
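Since these vdsm logs are what get inspected for migration progress, here is a minimal sketch of the kind of check involved. The sample log content is illustrative (a plausible VDSM migration-monitor line), not copied from the gists:

```python
# Sketch: a healthy live migration produces periodic "Migration Progress"
# lines from VDSM's migration monitor thread in vdsm.log; their absence is
# itself a symptom. The sample lines below are illustrative only.
import re

sample_log = """\
2018-02-16 09:40:00,001+0100 INFO  (migsrc/755cf168) [virt.vm] (vmId='755cf168') starting migration to victor.local.systea.fr (migration:449)
2018-02-16 09:40:10,002+0100 INFO  (migmon/755cf168) [virt.vm] (vmId='755cf168') Migration Progress: 10 seconds elapsed, 47% of data processed (migration:822)
"""

# Keep only the periodic progress samples emitted while data is moving.
progress = [line for line in sample_log.splitlines()
            if re.search(r'Migration Progress', line)]
print(len(progress))  # a migration stuck before transfer shows 0 such lines
```

The same filter can be run over the real vdsm.log with grep; zero matches during a long "migrating" state means the data transfer never started.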
--
Cordialement,
*Frank Soyer *
On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
Hi Frank,
I already replied on your last email. Can you provide the VDSM logs from the time of the migration failure for both hosts: ginger.local.systea.fr and victor.local.systea.fr?
Thanks, Maor
On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:
[...]
null, Custom Event ID: -1, Message: Migration completed (VM: Oracle=5FSECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, A= ctual downtime: (N/A)) 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.Migrate= VmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks=3D'[3f57e669-5e4c-4d10-85cc-d573004a09= 9d=3DVM]', sharedLocks=3D''}' 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] STA= RT, FullListVDSCommand(HostName =3D ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync=3D'true', hostId=3D'd569c2dd-8f30-4878-8aea-858db285cf69', vmIds=3D'[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc652= 98 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FIN= ISH, FullListVDSCommand, return: [{acpiEnable=3Dtrue, emulatedMachine=3Dpc-i440fx-rhel7.3.0, afterMigrationStatus=3D, tabletEnable=3Dtrue, pid=3D18748, guestDiskMapping=3D{}, transparentHugePages=3Dtrue, timeOffset=3D0, cpuType=3DNehalem, sm= p=3D2, guestNumaNodes=3D[Ljava.lang.Object;@760085fd, custom=3D{device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F87=
9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298
2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c
2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
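(Aside, for anyone replaying these dumps: the lines that actually tell the migration story are the VmAnalyzer state transitions. A small sketch of a filter for them follows; it is our own helper, not part of oVirt, and the regex only matches the "moved from 'X' --> 'Y'" form shown in the log above.)

```python
import re

# Matches VmAnalyzer state-transition lines from engine.log, e.g.:
# 2018-02-12 16:46:41,650+01 INFO [...VmAnalyzer] (...) [] VM '<uuid>'(<name>) moved from 'MigratingFrom' --> 'Down'
PATTERN = re.compile(
    r"^(?P<ts>\S+ \S+) .*VmAnalyzer.* VM '(?P<vmid>[0-9a-f-]+)'"
    r"(?:\((?P<name>[^)]+)\))? moved from '(?P<src>\w+)' --> '(?P<dst>\w+)'"
)

def transitions(lines):
    """Yield (timestamp, vm_id, from_state, to_state) for each transition line."""
    for line in lines:
        m = PATTERN.search(line)
        if m:
            yield (m.group("ts"), m.group("vmid"), m.group("src"), m.group("dst"))

# Two lines copied from the log excerpt above:
sample = [
    "2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down'",
    "2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up'",
]
for ts, vmid, src, dst in transitions(sample):
    print(ts, vmid, src, "->", dst)
```

Run against a full engine.log (e.g. `transitions(open("engine.log"))`), this reduces the healthy 1-vdisk migration to the expected MigratingFrom -> Down / MigratingTo -> Up pair; in the stuck 2-vdisk case below, no such pair ever appears.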
For the VM with 2 vdisks, we see:
2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}'
2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0
2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c
2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c
2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz).
...
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
...
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
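(For context: the convergenceSchedule parameter visible in both MigrateVDSCommand entries encodes how the migration is pushed to converge: start with 100 ms of allowed downtime, raise it step by step each time the migration is seen as stalling, and abort once past the last finite limit. The sketch below copies the schedule literally from the log; the lookup function is our reading of its semantics, not vdsm code.)

```python
# Convergence schedule exactly as it appears in MigrateVDSCommand above.
schedule = {
    "init": [{"name": "setDowntime", "params": [100]}],
    "stalling": [
        {"limit": 1, "action": {"name": "setDowntime", "params": [150]}},
        {"limit": 2, "action": {"name": "setDowntime", "params": [200]}},
        {"limit": 3, "action": {"name": "setDowntime", "params": [300]}},
        {"limit": 4, "action": {"name": "setDowntime", "params": [400]}},
        {"limit": 6, "action": {"name": "setDowntime", "params": [500]}},
        {"limit": -1, "action": {"name": "abort", "params": []}},
    ],
}

def action_for(stalls):
    """Return the action applied after `stalls` stalling rounds (our reading).

    The first step whose finite limit covers the count wins; limit -1 acts
    as the catch-all abort once every finite limit has been exceeded.
    """
    for step in schedule["stalling"]:
        limit = step["limit"]
        if limit != -1 and stalls <= limit:
            return step["action"]
    return {"name": "abort", "params": []}

print(action_for(2))
print(action_for(10))
```

So under this schedule a migration that keeps stalling, as the 2-vdisk VM apparently did, should eventually hit the abort action rather than hang for hours, which makes the frozen state above look like the stall detection itself never fired.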
and so on, the last lines repeating indefinitely for hours until we powered off the VM... Is this a known issue? Any idea about it?
Thanks
oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
--
Regards,
*Frank Soyer *
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
> your VDSM logs (see [2]), but I doubt it is related.

Indeed, it's not related.

The error in vdsm_victor.log just means that the info gathering call
tries to access the libvirt domain before the incoming migration is
completed. It's ugly but harmless.

> Milan, maybe you have any advice to troubleshoot the issue? Can the
> libvirt/qemu logs help?

It seems there is something wrong on (at least) the source host. There
are no migration progress messages in vdsm_ginger.log and there are
warnings about stale stat samples. That looks like problems with
calling libvirt - slow and/or stuck calls, maybe due to storage
problems. The possibly faulty second disk could cause that.

libvirt debug logs could tell us whether that is indeed the problem and
whether it is caused by storage or something else.

> I would suggest to open a bug on that issue so we can track it more
> properly.
>
> Regards,
> Maor
>
>
> [1]
> https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to
> VM running on 2 Hosts
>
> [2]
> 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:577)
> Traceback (most recent call last):
> File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572,
> in _handle_request
> res = method(**params)
> File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in
> _dynamicMethod
> result = fn(*methodArgs)
> File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
> io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
> File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
> 'current_values': v.getIoTune()}
> File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
> result = self.getIoTuneResponse()
> File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
> res = self._dom.blockIoTune(
> File "/usr/
Hi,
I don't believe that this is related to a host; tests have been done from victor as source to ginger as destination, and from ginger to victor. I don't see problems on storage (Gluster 3.12, native, managed by oVirt): VMs with a single disk from 20 to 250G migrate without error in a few seconds and with no downtime.
How can I enable this libvirt debug mode?

--
Regards,

*Frank Soyer *

On Friday, February 23, 2018 at 09:56 CET, Milan Zamazal <mzamazal@redhat.com> wrote:

Maor Lipchuk <mlipchuk@redhat.com> writes:

> I encountered a bug (see [1]) which contains the same error mentioned in
> lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47,
> in __getattr__
> % self.vmid)
> NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not
> started yet or was shut down
>
> On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fsoyer@systea.fr> wrote:
>
>> Hi,
>> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger
>> (192.168.0.6) migrated (or failed to migrate...) to victor (192.168.0.5),
>> while the engine.log in the first mail on 2018-02-12 was for VMs standing
>> on victor, migrated (or failed to migrate...) to ginger.
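(For reference on the question above: libvirt debug logging is normally enabled host-side through the logging settings in /etc/libvirt/libvirtd.conf, with the filter syntax described in the libvirt logging documentation. The filter set below is only a reasonable starting point for a migration problem, not an oVirt recommendation.)

```
# /etc/libvirt/libvirtd.conf -- debug logs for the qemu and core libvirt
# subsystems, noisier subsystems kept at warning level, written to a file:
log_filters="1:qemu 1:libvirt 3:object 3:json 3:event 3:util"
log_outputs="1:file:/var/log/libvirt/libvirtd-debug.log"
```

After editing, restart libvirtd on the host (running VMs keep running, but the management connection is briefly interrupted): `systemctl restart libvirtd`. Remember to revert the filters afterwards; debug logs grow quickly.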
>> Symptoms were exactly the same, in both directions, and the VMs worked
>> like a charm before, and even after (migration "killed" by a poweroff of
>> the VMs).
>> Am I the only one experiencing this problem?
>>
>> Thanks
>> --
>> Regards,
>>
>> *Frank Soyer *
>>
>> On Thursday, February 22, 2018 at 00:45 CET, Maor Lipchuk <mlipchuk@redhat.com>
>> wrote:
>>
>> Hi Frank,
>>
>> Sorry about the delayed response.
>> I've been going through the logs you attached, although I could not find
>> any specific indication why the migration failed because of the disk you
>> were mentioning.
>> Does this VM run with both disks on the target host without migration?
>>
>> Regards,
>> Maor
>>
>> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr> wrote:
>>>
>>> Hi Maor,
>>> sorry for the double post, I changed the email address of my account and
>>> supposed that I'd need to re-post it.
>>> And thank you for your time. Here are the logs.
>>> I added a vdisk to an existing VM: it no longer migrates, needing a
>>> poweroff after minutes.
>>> Then simply deleting the second disk makes it migrate in exactly 9s
>>> without problem!
>>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
>>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
>>>
>>> --
>>> Regards,
>>>
>>> *Frank Soyer *
>>>
>>> On Wednesday, February 14, 2018 at 11:04 CET, Maor Lipchuk
>>> <mlipchuk@redhat.com> wrote:
>>>
>>> Hi Frank,
>>>
>>> I already replied on your last email.
>>> Can you provide the VDSM logs from the time of the migration failure for
>>> both hosts:
>>> ginger.local.systea.fr and victor.local.systea.fr
>>>
>>> Thanks,
>>> Maor
device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c93= ab-4<br />>>>> df1-435c-af02-565039fcc254device=5F8945f61a-= abbe-4156-8485-a4a<br />>>>> a6f1908db=3DVmDevice:{id=3D'Vm= DeviceId:{deviceId=3D'8945f61a-abbe-4156-8485-a4aa6f1908db',<br />>&= gt;>> vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', device=3D'= unix',<br />>>>> type=3D'CHANNEL', bootOrder=3D'0', specPar= ams=3D'[]', address=3D'{bus=3D0,<br />>>>> controller=3D0, = type=3Dvirtio-serial, port=3D2}', managed=3D'false',<br />>>>&= gt; plugged=3D'true', readOnly=3D'false', deviceAlias=3D'channel1',<br = />>>>> customProperties=3D'[]', snapshotId=3D'null', logica= lName=3D'null',<br />>>>> hostDevice=3D'null'}}, vmType=3Dk= vm, memSize=3D8192, smpCoresPerSocket=3D1,<br />>>>> vmName= =3DOracle=5FSECONDARY, nice=3D0, status=3DMigration Source, maxMemSize=3D= 32768,<br />>>>> bootMenuEnable=3Dfalse, vmId=3D3f57e669-5e= 4c-4d10-85cc-d573004a099d,<br />>>>> numOfIoThreads=3D2, sm= pThreadsPerCore=3D1, memGuaranteedSize=3D8192,<br />>>>> kv= mEnable=3Dtrue, pitReinjection=3Dfalse, displayNetwork=3Dovirtmgmt,<br = />>>>> devices=3D[Ljava.lang.Object;@28ae66d7, display=3Dvn= c, maxVCpus=3D16,<br />>>>> clientIp=3D, statusTime=3D42994= 84520, maxMemSlots=3D16}], log id: 54b4b435<br />>>>> 2018-= 02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro<br />>>&= gt;> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1)<= br />>>>> [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-= 4878-8aea-858db285cf69'<br />>>>> 2018-02-12 16:46:31,151+0= 1 INFO [org.ovirt.engine.core.vdsbro<br />>>>> ker.monitori= ng.VmDevicesMonitoring] (DefaultQuartzScheduler9)<br />>>>>= [54a65b66] Received a vnc Device without an address when processing VM= <br />>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, sk= ipping device:<br />>>>> {device=3Dvnc, specParams=3D{displ= ayNetwork=3Dovirtmgmt, keyMap=3Dfr,<br />>>>> displayIp=3D1= 92.168.0.6}, type=3Dgraphics, deviceId=3D813957b1-446a-4e88-9e40-9fe76d= 2c442d,<br 
/>>>>> port=3D5901}<br />>>>> 2018-0= 2-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro<br />>>&g= t;> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9)<br= />>>>> [54a65b66] Received a lease Device without an addre= ss when processing VM<br />>>>> 3f57e669-5e4c-4d10-85cc-d57= 3004a099d devices, skipping device:<br />>>>> {lease=5Fid=3D= 3f57e669-5e4c-4d10-85cc-d573004a099d,<br />>>>> sd=5Fid=3D1= e51cecc-eb2e-47d0-b185-920fdc7afa16,<br />>>>> deviceId=3D{= uuid=3Da09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=3D6291456,<br />&g= t;>>> device=3Dlease, path=3D/rhev/data-center/mnt/glusterSD/1= 92.168.0.6:<br />>>>> =5FDATA01/1e51cecc-eb2e-47d0-b185-920= fdc7afa16/dom=5Fmd/xleases, type=3Dlease}<br />>>>> 2018-02= -12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.Vm= Analyzer]<br />>>>> (DefaultQuartzScheduler1) [27fac647] VM= '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle=5FSECONDARY)<br />>&g= t;>> was unexpectedly detected as 'MigratingTo' on VDS<br />>&= gt;>> 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.= fr)<br />>>>> (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd= 87bb1')<br />>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovir= t.engine.core.vdsbroker.monitoring.VmAnalyzer]<br />>>>> (D= efaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a= 099d'<br />>>>> is migrating to VDS 'd569c2dd-8f30-4878-8ae= a-858db285cf69'(<br />>>>> ginger.local.systea.fr) ignoring= it in the refresh until migration is<br />>>>> done<br />&= gt;>>> ....<br />>>>> 2018-02-12 16:46:41,631+01 I= NFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]<br />>&g= t;>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d5= 73004a099d'<br />>>>> was reported as Down on VDS 'ce3938b1= -b23f-4d22-840a-f17d7cd87bb1'(<br />>>>> victor.local.syste= a.fr)<br />>>>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.= engine.core.vdsbro<br />>>>> ker.vdsbroker.DestroyVDSComman= d] (ForkJoinPool-1-worker-11) [] START,<br />>>>> 
DestroyVD= SCommand(HostName =3D victor.local.systea.fr,<br />>>>> Des= troyVmVDSCommandParameters:{runAsync=3D'true',<br />>>>> ho= stId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1',<br />>>>> vm= Id=3D'3f57e669-5e4c-4d10-85cc-d573004a099d', force=3D'false',<br />>= >>> secondsToWait=3D'0', gracefully=3D'false', reason=3D'', ig= noreNoVm=3D'true'}), log<br />>>>> id: 560eca57<br />>&g= t;>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbr= o<br />>>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-= 1-worker-11) [] FINISH,<br />>>>> DestroyVDSCommand, log id= : 560eca57<br />>>>> 2018-02-12 16:46:41,650+01 INFO [org.o= virt.engine.core.vdsbroker.monitoring.VmAnalyzer]<br />>>>>= (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d= '(Oracle=5FSECONDARY)<br />>>>> moved from 'MigratingFrom' = --> 'Down'<br />>>>> 2018-02-12 16:46:41,651+01 INFO [or= g.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]<br />>>>&= gt; (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-8= 5cc-d573004a099d'(Oracle=5FSECONDARY)<br />>>>> to Host 'd5= 69c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status<br />>>= >> 'MigratingTo'<br />>>>> 2018-02-12 16:46:42,163+01= INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]<br />>= >>> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d= 573004a099d'(Oracle=5FSECONDARY)<br />>>>> moved from 'Migr= atingTo' --> 'Up'<br />>>>> 2018-02-12 16:46:42,169+01 I= NFO [org.ovirt.engine.core.vdsbro<br />>>>> ker.vdsbroker.M= igrateStatusVDSCommand] (ForkJoinPool-1-worker-4) []<br />>>>&= gt; START, MigrateStatusVDSCommand(HostName =3D ginger.local.systea.fr,= <br />>>>> MigrateStatusVDSCommandParameters:{runAsync=3D't= rue',<br />>>>> hostId=3D'd569c2dd-8f30-4878-8aea-858db285c= f69',<br />>>>> vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099= d'}), log id: 7a25c281<br />>>>> 2018-02-12 16:46:42,174+01= INFO [org.ovirt.engine.core.vdsbro<br />>>>> ker.vdsbroker= .MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) []<br />>>>= ;> FINISH, MigrateStatusVDSCommand, log id: 7a25c281<br />>>&g= t;> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db<br= />>>>> broker.auditloghandling.AuditLogDirector] (ForkJoin= Pool-1-worker-4) []<br />>>>> EVENT=5FID: VM=5FMIGRATION=5F= DONE(63), Correlation ID:<br />>>>> 2f712024-5982-46a8-82c8= -fd8293da5725, Job ID:<br />>>>> 4bd19aa9-cc99-4d02-884e-5a= 1e857a7738, Call Stack: null, Custom ID:<br />>>>> null, Cu= stom Event ID: -1, Message: Migration completed (VM:<br />>>>&= gt; Oracle=5FSECONDARY, Source: victor.local.systea.fr, Destination:<br= />>>>> ginger.local.systea.fr, Duration: 11 seconds, Total= : 11 seconds, Actual<br />>>>> downtime: (N/A))<br />>&g= t;>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.M= igrateVmToServerCommand]<br />>>>> (ForkJoinPool-1-worker-4= ) [] Lock freed to object<br />>>>> 'EngineLock:{exclusiveL= ocks=3D'[3f57e669-5e4c-4d10-85cc-d573004a099d=3DVM]',<br />>>>= > sharedLocks=3D''}'<br />>>>> 2018-02-12 16:46:42,203+0= 1 INFO [org.ovirt.engine.core.vdsbro<br />>>>> ker.vdsbroke= r.FullListVDSCommand] 
(ForkJoinPool-1-worker-4) [] START,<br />>>= >> FullListVDSCommand(HostName =3D ginger.local.systea.fr,<br />&= gt;>>> FullListVDSCommandParameters:{runAsync=3D'true',<br />&= gt;>>> hostId=3D'd569c2dd-8f30-4878-8aea-858db285cf69',<br />&= gt;>>> vmIds=3D'[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log= id: 7cc65298<br />>>>> 2018-02-12 16:46:42,254+01 INFO [or= g.ovirt.engine.core.vdsbro<br />>>>> ker.vdsbroker.FullList= VDSCommand] (ForkJoinPool-1-worker-4) [] FINISH,<br />>>>> = FullListVDSCommand, return: [{acpiEnable=3Dtrue,<br />>>>> = emulatedMachine=3Dpc-i440fx-rhel7.3.0, afterMigrationStatus=3D,<br />&g= t;>>> tabletEnable=3Dtrue, pid=3D18748, guestDiskMapping=3D{},= <br />>>>> transparentHugePages=3Dtrue, timeOffset=3D0, cpu= Type=3DNehalem, smp=3D2,<br />>>>> guestNumaNodes=3D[Ljava.= lang.Object;@760085fd,<br />>>>> custom=3D{device=5Ffbddd52= 8-7d93-49c6-a286-180e021cb274device=5F87<br />>>>> 9c93ab-4= df1-435c-af02-565039fcc254=3DVmDevice:{id=3D'VmDeviceId:<br />>>&= gt;> {deviceId=3D'879c93ab-4df1-435c-af02-565039fcc254',<br />>&g= t;>> vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', device=3D'u= nix',<br />>>>> type=3D'CHANNEL', bootOrder=3D'0', specPara= ms=3D'[]', address=3D'{bus=3D0,<br />>>>> controller=3D0, t= ype=3Dvirtio-serial, port=3D1}', managed=3D'false',<br />>>>&g= t; plugged=3D'true', readOnly=3D'false', deviceAlias=3D'channel0',<br /= dd528-7d93-49c6-a286<br />>>>> -180e021cb274device=5F879c93= ab-4df1-435c-af02-565039fcc254devi<br />>>>> ce=5F8945f61a-= abbe-4156-8485-a4aa6f1908dbdevice=5F017b5e59-01c4-<br />>>>>= ; 4aac-bf0c-b5d9557284d6=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'0<br= />>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId=3D'3f57e= 669-5e4c-4d10-85cc-d573004a099d'}',<br />>>>> device=3D'tab= let', type=3D'UNKNOWN', bootOrder=3D'0', specParams=3D'[]',<br />>&g= t;>> address=3D'{bus=3D0, type=3Dusb, port=3D1}', managed=3D'fals= e', plugged=3D'true',<br />>>>> readOnly=3D'false', deviceA= lias=3D'input0', 
customProperties=3D'[]',<br />>>>> snapsho= tId=3D'null', logicalName=3D'null', hostDevice=3D'null'},<br />>>= >> device=5Ffbddd528-7d93-49c6-a286-180e021cb274=3DVmDevice:{id=3D= 'Vm<br />>>>> DeviceId:{deviceId=3D'fbddd528-7d93-49c6-a286= -180e021cb274',<br />>>>> vmId=3D'3f57e669-5e4c-4d10-85cc-d= 573004a099d'}', device=3D'ide',<br />>>>> type=3D'CONTROLLE= R', bootOrder=3D'0', specParams=3D'[]', address=3D'{slot=3D0x01,<br />&= gt;>>> bus=3D0x00, domain=3D0x0000, type=3Dpci, function=3D0x1= }', managed=3D'false',<br />>>>> plugged=3D'true', readOnly= =3D'false', deviceAlias=3D'ide', customProperties=3D'[]',<br />>>= >> snapshotId=3D'null', logicalName=3D'null', hostDevice=3D'null'= },<br />>>>> device=5Ffbddd528-7d93-49c6-a286-180e021cb274d= evice=5F879c93ab-4<br />>>>> df1-435c-af02-565039fcc254devi= ce=5F8945f61a-abbe-4156-8485-a4a<br />>>>> a6f1908db=3DVmDe= vice:{id=3D'VmDeviceId:{deviceId=3D'8945f61a-abbe-4156-8485-a4aa6f1908d= b',<br />>>>> vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'= }', device=3D'unix',<br />>>>> type=3D'CHANNEL', bootOrder=3D= '0', specParams=3D'[]', address=3D'{bus=3D0,<br />>>>> cont= roller=3D0, type=3Dvirtio-serial, port=3D2}', managed=3D'false',<br />&= gt;>>> plugged=3D'true', readOnly=3D'false', deviceAlias=3D'ch= annel1',<br />>>>> customProperties=3D'[]', snapshotId=3D'n= ull', logicalName=3D'null',<br />>>>> hostDevice=3D'null'}}= , vmType=3Dkvm, memSize=3D8192, smpCoresPerSocket=3D1,<br />>>>= ;> vmName=3DOracle=5FSECONDARY, nice=3D0, status=3DUp, maxMemSize=3D= 32768,<br />>>>> bootMenuEnable=3Dfalse, vmId=3D3f57e669-5e= 4c-4d10-85cc-d573004a099d,<br />>>>> numOfIoThreads=3D2, sm= pThreadsPerCore=3D1, smartcardEnable=3Dfalse,<br />>>>> max= MemSlots=3D16, kvmEnable=3Dtrue, pitReinjection=3Dfalse,<br />>>&= gt;> displayNetwork=3Dovirtmgmt, devices=3D[Ljava.lang.Object;@2e4d3= dd3,<br />>>>> memGuaranteedSize=3D8192, maxVCpus=3D16, cli= entIp=3D, statusTime=3D4304259600<br />>>>> <(430)%20425= -9600>, display=3Dvnc}], log 
id: 7cc65298<br />>>>> 2018= -02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro<br />>>= >> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) = []<br />>>>> Received a vnc Device without an address when = processing VM<br />>>>> 3f57e669-5e4c-4d10-85cc-d573004a099= d devices, skipping device:<br />>>>> {device=3Dvnc, specPa= rams=3D{displayNetwork=3Dovirtmgmt, keyMap=3Dfr,<br />>>>> = displayIp=3D192.168.0.5}, type=3Dgraphics, deviceId=3D813957b1-446a-4e8= 8-9e40-9fe76d2c442d,<br />>>>> port=3D5901}<br />>>&g= t;> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro<br= />>>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1= -worker-4) []<br />>>>> Received a lease Device without an = address when processing VM<br />>>>> 3f57e669-5e4c-4d10-85c= c-d573004a099d devices, skipping device:<br />>>>> {lease=5F= id=3D3f57e669-5e4c-4d10-85cc-d573004a099d,<br />>>>> sd=5Fi= d=3D1e51cecc-eb2e-47d0-b185-920fdc7afa16,<br />>>>> deviceI= d=3D{uuid=3Da09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=3D6291456,<br= />>>>> device=3Dlease, path=3D/rhev/data-center/mnt/gluste= rSD/192.168.0.6:<br />>>>> =5FDATA01/1e51cecc-eb2e-47d0-b18= 5-920fdc7afa16/dom=5Fmd/xleases, type=3Dlease}<br />>>>> 20= 18-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbro<br />>&g= t;>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) = [7fcb200a]<br />>>>> FINISH, FullListVDSCommand, return: [{= acpiEnable=3Dtrue,<br />>>>> emulatedMachine=3Dpc-i440fx-rh= el7.3.0, afterMigrationStatus=3D,<br />>>>> tabletEnable=3D= true, pid=3D18748, guestDiskMapping=3D{0QEMU=5FQEMU=5FH<br />>>&g= t;> ARDDISK=5Fd890fa68-fba4-4f49-9=3D{name=3D/dev/sda},<br />>>= ;>> QEMU=5FDVD-ROM=5FQM00003=3D{name=3D/dev/sr0}}, transparentHug= ePages=3Dtrue,<br />>>>> timeOffset=3D0, cpuType=3DNehalem,= smp=3D2, guestNumaNodes=3D[Ljava.lang.Obj<br />>>>> ect;@7= 7951faf, custom=3D{device=5Ffbddd528-7d93-4<br />>>>> 9c6-a= 286-180e021cb274device=5F879c93ab-4df1-435c-af02-565039fc<br />>>= >> 
c254=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'879c93ab-4df1-4= 35c-af02-565039fcc254',<br />>>>> vmId=3D'3f57e669-5e4c-4d1= 0-85cc-d573004a099d'}', device=3D'unix',<br />>>>> type=3D'= CHANNEL', bootOrder=3D'0', specParams=3D'[]', address=3D'{bus=3D0,<br /= pe=3Dgraphics, deviceId=3D813957b1-446a-4e88-9e40-9fe76d2c442d,<br />&g= t;>>> port=3D5901}<br />>>>> 2018-02-12 16:46:46,2= 68+01 INFO [org.ovirt.engine.core.vdsbro<br />>>>> ker.moni= toring.VmDevicesMonitoring] (DefaultQuartzScheduler5)<br />>>>= > [7fcb200a] Received a lease Device without an address when process= ing VM<br />>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devic= es, skipping device:<br />>>>> {lease=5Fid=3D3f57e669-5e4c-= 4d10-85cc-d573004a099d,<br />>>>> sd=5Fid=3D1e51cecc-eb2e-4= 7d0-b185-920fdc7afa16,<br />>>>> deviceId=3D{uuid=3Da09949a= a-5642-4b6d-94a4-8b0d04257be5}, offset=3D6291456,<br />>>>>= device=3Dlease, path=3D/rhev/data-center/mnt/glusterSD/192.168.0.6:<br= />>>>> =5FDATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom=5F= md/xleases, type=3Dlease}<br />>>>><br />>>>><b= r />>>>><br />>>>><br />>>>> For th= e VM with 2 vdisks we see :<br />>>>><br />>>>>= 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmTo= ServerCommand]<br />>>>> (default task-50) [92b5af33-cb87-4= 142-b8fe-8b838dd7458e] Lock Acquired<br />>>>> to object 'E= ngineLock:{exclusiveLocks=3D'[f7d4ec12-627a-4b83-b59e-886400d55474=3DVM= ]',<br />>>>> sharedLocks=3D''}'<br />>>>> 2018= -02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServe= rCommand]<br />>>>> (org.ovirt.thread.pool-6-thread-49) [92= b5af33-cb87-4142-b8fe-8b838dd7458e]<br />>>>> Running comma= nd: MigrateVmToServerCommand internal: false. 
Entities<br />>>>= ;> affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMActio= n<br />>>>> group MIGRATE=5FVM with role type USER<br />>= ;>>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vd= sbroker.MigrateVDSCommand]<br />>>>> (org.ovirt.thread.pool= -6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]<br />>>>&= gt; START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync=3D'= true',<br />>>>> hostId=3D'd569c2dd-8f30-4878-8aea-858db285= cf69',<br />>>>> vmId=3D'f7d4ec12-627a-4b83-b59e-886400d554= 74', srcHost=3D'192.168.0.5',<br />>>>> dstVdsId=3D'ce3938b= 1-b23f-4d22-840a-f17d7cd87bb1', dstHost=3D'<br />>>>> 192.1= 68.0.6:54321', migrationMethod=3D'ONLINE', tunnelMigration=3D'false',<b= r />>>>> migrationDowntime=3D'0', autoConverge=3D'true', mi= grateCompressed=3D'false',<br />>>>> consoleAddress=3D'null= ', maxBandwidth=3D'500', enableGuestEvents=3D'true',<br />>>>&= gt; maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2',<br />>= >>> convergenceSchedule=3D'[init=3D[{name=3DsetDowntime, param= s=3D[100]}],<br />>>>> stalling=3D[{limit=3D1, action=3D{na= me=3DsetDowntime, params=3D[150]}}, {limit=3D2,<br />>>>> a= ction=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3,<br />>>= ;>> action=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4,<b= r />>>>> action=3D{name=3DsetDowntime, params=3D[400]}}, {l= imit=3D6,<br />>>>> action=3D{name=3DsetDowntime, params=3D= [500]}}, {limit=3D-1, action=3D{name=3Dabort,<br />>>>> par= ams=3D[]}}]]'}), log id: 3702a9e0<br />>>>> 2018-02-12 16:4= 9:06,713+01 INFO [org.ovirt.engine.core.vdsbro<br />>>>> ke= r.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49= )<br />>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] START,<b= r />>>>> MigrateBrokerVDSCommand(HostName =3D ginger.local.= systea.fr,<br />>>>> MigrateVDSCommandParameters:{runAsync=3D= 'true',<br />>>>> hostId=3D'd569c2dd-8f30-4878-8aea-858db28= 5cf69',<br />>>>> vmId=3D'f7d4ec12-627a-4b83-b59e-886400d55= 474', 
srcHost=3D'192.168.0.5',<br />>>>> dstVdsId=3D'ce3938= b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=3D'<br />>>>> 192.= 168.0.6:54321', migrationMethod=3D'ONLINE', tunnelMigration=3D'false',<= br />>>>> migrationDowntime=3D'0', autoConverge=3D'true', m= igrateCompressed=3D'false',<br />>>>> consoleAddress=3D'nul= l', maxBandwidth=3D'500', enableGuestEvents=3D'true',<br />>>>= > maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2',<br />>= ;>>> convergenceSchedule=3D'[init=3D[{name=3DsetDowntime, para= ms=3D[100]}],<br />>>>> stalling=3D[{limit=3D1, action=3D{n= ame=3DsetDowntime, params=3D[150]}}, {limit=3D2,<br />>>>> = action=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3,<br />>&g= t;>> action=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4,<= br />>>>> action=3D{name=3DsetDowntime, params=3D[400]}}, {= limit=3D6,<br />>>>> action=3D{name=3DsetDowntime, params=3D= [500]}}, {limit=3D-1, action=3D{name=3Dabort,<br />>>>> par= ams=3D[]}}]]'}), log id: 1840069c<br />>>>> 2018-02-12 16:4= 9:06,724+01 INFO [org.ovirt.engine.core.vdsbro<br />>>>> ke= r.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49= )<br />>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, = MigrateBrokerVDSCommand,<br />>>>> log id: 1840069c<br />&g= t;>>> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.v= dsbroker.MigrateVDSCommand]<br />>>>> (org.ovirt.thread.poo= l-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]<br />>>>= > FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0= <br />>>>> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engin= e.core.dal.db<br />>>>> broker.auditloghandling.AuditLogDir= ector] (org.ovirt.thread.pool-6-thread-49)<br />>>>> [92b5a= f33-cb87-4142-b8fe-8b838dd7458e] EVENT=5FID:<br />>>>> VM=5F= MIGRATION=5FSTART(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7= 458e,<br />>>>> Job ID: f4f54054-f7c8-4481-8eda-d5a15c38306= 1, Call Stack: null, Custom<br />>>>> ID: null, Custom Even= t ID: -1, Message: Migration 
started (VM:<br />>>>> Oracle=5F= PRIMARY, Source: ginger.local.systea.fr, Destination:<br />>>>= > victor.local.systea.fr, User: admin@internal-authz).<br />>>= >> ...<br />>>>> 2018-02-12 16:49:16,453+01 INFO [org= .ovirt.engine.core.vdsbro<br />>>>> ker.monitoring.VmsStati= sticsFetcher] (DefaultQuartzScheduler4)<br />>>>> [162a5bc3= ] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'<br />&g= t;>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.v= dsbroker.monitoring.VmAnalyzer]<br />>>>> (DefaultQuartzSch= eduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle=5F= PRIMARY)<br />>>>> was unexpectedly detected as 'MigratingT= o' on VDS<br />>>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(= victor.local.systea.fr)<br />>>>> (expected on 'd569c2dd-8f= 30-4878-8aea-858db285cf69')<br />>>>> 2018-02-12 16:49:16,4= 55+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]<br /=
>>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627= a-4b83-b59e-886400d55474'<br />>>>> is migrating to VDS 'ce= 3938b1-b23f-4d22-840a-f17d7cd87bb1'(<br />>>>> victor.local= .systea.fr) ignoring it in the refresh until migration is<br />>>= >> done<br />>>>> ...<br />>>>> 2018-02-1= 2 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAn= alyzer]<br />>>>> (DefaultQuartzScheduler5) [11a7619a] VM '= f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle=5FPRIMARY)<br />>>&g= t;> was unexpectedly detected as 'MigratingTo' on VDS<br />>>&= gt;> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)<= br />>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf6= 9')<br />>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.en= gine.core.vdsbroker.monitoring.VmAnalyzer]<br />>>>> (Defau= ltQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474= '<br />>>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f1= 7d7cd87bb1'(<br />>>>> victor.local.systea.fr) ignoring it = in the refresh until migration is<br />>>>> done<br />>&= gt;>><br />>>>><br />>>>><br />>>&g= t;> and so on, last lines repeated indefinitly for hours since we po= weroff<br />>>>> the VM...<br />>>>> Is this so= mething known ? Any idea about that ?<br />>>>><br />>&g= t;>> Thanks<br />>>>><br />>>>> Ovirt 4.1= .6, updated last at feb-13. Gluster 3.12.1.<br />>>>><br />= >>>> --<br />>>>><br />>>>> Cordial= ement,<br />>>>><br />>>>> *Frank Soyer *<br />= >>>><br />>>>> =5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F= =5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F= =5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F<br />>>>> Users mailin= g list<br />>>>> Users@ovirt.org<br />>>>> http= ://lists.ovirt.org/mailman/listinfo/users<br />>>>><br />&g= t;>><br />>>><br />>>><br />>>><br />&= gt;><br />>><br />>><br /> </blockquote><br /> = ;</html>
------=_=-_OpenGroupware_org_NGMime-15477-1519640593.181730-42--------

"fsoyer" <fsoyer@systea.fr> writes:
I don't believe that this is related to a host; tests have been done from victor (source) to ginger (dest) and from ginger to victor. I don't see problems on storage (Gluster 3.12, native, managed by oVirt): VMs with a single disk from 20 to 250G migrate without error in a few seconds and with no downtime.
The host itself may be fine, but libvirt/QEMU running there may expose problems, perhaps just for some VMs. According to your logs something is not behaving as expected on the source host during the faulty migration.
How can I enable this libvirt debug mode ?
Set the following options in /etc/libvirt/libvirtd.conf (look for examples in the comments there):

  log_level = 1
  log_outputs = "1:file:/var/log/libvirt/libvirtd.log"

and restart libvirt. Then /var/log/libvirt/libvirtd.log should contain the log. It will be huge, so I suggest enabling it only for the time needed to reproduce the problem.
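For reference, a minimal sketch of that change as shell commands. The two option names come from the advice above; the scratch-copy default for CONF and the commented-out restart are assumptions so the commands can be rehearsed safely before touching a real host:

```shell
# Rehearse on a scratch copy first; on a real host point CONF at
# /etc/libvirt/libvirtd.conf and uncomment the restart below.
CONF="${CONF:-$(mktemp)}"

# Append the two debug options (log_level=1 is the most verbose level).
cat >> "$CONF" <<'EOF'
log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"
EOF

# Confirm both options landed in the file.
grep -c '^log_' "$CONF"
# systemctl restart libvirtd   # then reproduce the failing migration
```

Remember to revert the options and restart the daemon again afterwards, since the debug log grows very quickly.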
--
Cordialement,
Frank Soyer
On Friday, February 23, 2018 at 09:56 CET, Milan Zamazal <mzamazal@redhat.com> wrote:

Maor Lipchuk <mlipchuk@redhat.com> writes:
I encountered a bug (see [1]) which contains the same error mentioned in your VDSM logs (see [2]), but I doubt it is related.
Indeed, it's not related.
The error in vdsm_victor.log just means that the info gathering call tries to access libvirt domain before the incoming migration is completed. It's ugly but harmless.
Milan, maybe you have any advice to troubleshoot the issue? Will the libvirt/qemu logs can help?
It seems there is something wrong on (at least) the source host. There are no migration progress messages in the vdsm_ginger.log and there are warnings about stale stat samples. That looks like problems with calling libvirt – slow and/or stuck calls, maybe due to storage problems. The possibly faulty second disk could cause that.
libvirt debug logs could tell us whether that is indeed the problem and whether it is caused by storage or something else.
I would suggest opening a bug on that issue so we can track it more properly.
Regards, Maor
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to VM running on 2 Hosts
[2]

2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
    io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
  File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
    'current_values': v.getIoTune()}
  File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
    result = self.getIoTuneResponse()
  File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
    res = self._dom.blockIoTune(
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
    % self.vmid)
NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not started yet or was shut down
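As a quick sanity check, one can count how often that (harmless) error appears in a copy of vdsm.log. This is a hypothetical helper, not part of VDSM; the log path and file layout are assumptions and may differ between hosts:

```shell
# Hypothetical helper: count NotConnectedError lines in a vdsm log file,
# e.g. /var/log/vdsm/vdsm.log (the path may differ between oVirt versions).
count_not_connected() {
  if [ -r "$1" ]; then
    grep -c 'NotConnectedError' "$1"   # grep -c prints 0 when no match
  else
    echo 0                             # treat a missing file as zero hits
  fi
}

# Demo on a scratch file containing one traceback tail line.
TMP="$(mktemp)"
printf '%s\n' "NotConnectedError: VM was not started yet or was shut down" > "$TMP"
count_not_connected "$TMP"     # prints 1
```

A count that spikes only around the failed migrations would support the "ugly but harmless" reading above.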
On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fsoyer@systea.fr> wrote:
Hi, Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger (192.168.0.6) migrated (or rather, failing to migrate...) to victor (192.168.0.5), while the engine.log in the first mail on 2018-02-12 was for VMs standing on victor, migrated (or failing to migrate...) to ginger. Symptoms were exactly the same in both directions, and the VMs work like a charm before, and even after (migration "killed" by a poweroff of the VMs). Am I the only one experiencing this problem ?
Thanks --
Cordialement,
*Frank Soyer *
On Thursday, February 22, 2018 at 00:45 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
Hi Frank,
Sorry about the delayed response. I've been going through the logs you attached, but I could not find any specific indication of why the migration failed because of the disk you were mentioning. Does this VM run with both disks on the target host without migration?
Regards, Maor
On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi Maor, sorry for the double post, I've changed the email address of my account and supposed that I'd need to re-post it. And thank you for your time. Here are the logs. I added a vdisk to an existing VM : it no longer migrates, and I needed to power it off after minutes. Then simply deleting the second disk makes it migrate in exactly 9s without problem ! https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561 https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
--
Cordialement,
*Frank Soyer *

On Wednesday, February 14, 2018 at 11:04 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
Hi Frank,
I already replied to your last email. Can you provide the VDSM logs from the time of the migration failure for both hosts: ginger.local.systea.fr and victor.local.systea.fr ?
Thanks, Maor
On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi all, I discovered yesterday a problem when migrating VMs with more than one vdisk. On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs needed for a test, from a template with a 20G vdisk. On these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks worked well.

Then I saw some updates waiting on the host and tried to put it in maintenance... but it got stuck on the two VMs. They were marked "migrating", but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time. I saw that a kvm process for the (big) VMs was launched on the source AND destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for the VMs : it failed. The only way to stop it was to power off the VMs : the kvm process died on the 2 hosts and the GUI alerted on a failed migration.

In doubt, I tried to delete the second vdisk on one of these VMs : it then migrated without error ! And no access problem. I tried to extend the first vdisk of the second VM, then delete the second vdisk : it now migrates without problem !
So after another test with a VM with 2 vdisks, I can say that this is what blocked the migration process :(
In engine.log, for a VM with 1 vdisk migrating well, we see:
2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.db broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz). 
2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: {deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286 -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm 
DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'( ginger.local.systea.fr) ignoring it in the refresh until migration is done .... 
2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down' 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status 'MigratingTo' 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up' 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A)) 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, 
FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: {deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286 -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0,
afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_H ARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj ect;@77951faf, custom={device_fbddd528-7d93-4 9c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fc c254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286 -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
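The convergenceSchedule parameter in the MigrateVDSCommand entries above encodes how oVirt progressively raises the allowed downtime while a migration stalls, and finally aborts. A small sketch of one plausible way such a schedule is interpreted (the exact meaning of `limit` here is an assumption for illustration, not taken from oVirt source):

```python
# Hypothetical interpreter for the convergence schedule seen in the logs above.
schedule = {
    "init": [{"name": "setDowntime", "params": [100]}],
    "stalling": [
        {"limit": 1, "action": {"name": "setDowntime", "params": [150]}},
        {"limit": 2, "action": {"name": "setDowntime", "params": [200]}},
        {"limit": 3, "action": {"name": "setDowntime", "params": [300]}},
        {"limit": 4, "action": {"name": "setDowntime", "params": [400]}},
        {"limit": 6, "action": {"name": "setDowntime", "params": [500]}},
        {"limit": -1, "action": {"name": "abort", "params": []}},
    ],
}

def action_for(stall_count):
    """Return the action applied after `stall_count` stalled iterations:
    the first rule whose limit covers the count; limit -1 is the last
    resort (abort).  Assumed semantics, for illustration only."""
    for rule in schedule["stalling"]:
        if rule["limit"] == -1 or stall_count <= rule["limit"]:
            return rule["action"]

print(action_for(2))   # -> {'name': 'setDowntime', 'params': [200]}
print(action_for(10))  # -> {'name': 'abort', 'params': []}
```

So a healthy migration converges after a few downtime bumps, while one that keeps stalling walks the whole ladder and is aborted.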
For the VM with 2 vdisks, we see:
2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}' 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', 
vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz). ... 
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbro ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) ignoring it in the refresh until migration is done ... 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( victor.local.systea.fr) ignoring it in the refresh until migration is done
and so on, with the last lines repeated indefinitely for hours until we powered off the VM... Is this a known issue? Any idea about it?
Thanks
oVirt 4.1.6, last updated Feb 13. Gluster 3.12.1.
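When the "ignoring it in the refresh" lines repeat for hours like this, one way to confirm from engine.log that a migration started but never completed is to pair VM_MIGRATION_START and VM_MIGRATION_DONE events by correlation ID. A sketch on trimmed sample lines (on the engine the file is typically /var/log/ovirt-engine/engine.log; the path and the shortened lines are illustrative):

```python
import re

# Trimmed sample of engine.log lines like the ones quoted in this thread.
sample = """\
2018-02-12 16:46:30,301+01 INFO ... EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, ...
2018-02-12 16:46:42,194+01 INFO ... EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, ...
2018-02-12 16:49:06,753+01 INFO ... EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, ...
"""

started, done = set(), set()
for line in sample.splitlines():
    m = re.search(r"EVENT_ID: VM_MIGRATION_(START|DONE)\(\d+\), Correlation ID: (\S+),", line)
    if m:
        (started if m.group(1) == "START" else done).add(m.group(2))

# Migrations that started but never logged DONE are the stuck ones:
print(sorted(started - done))  # -> ['92b5af33-cb87-4142-b8fe-8b838dd7458e']
```

Here the 1-vdisk migration (2f712024...) completes, while the 2-vdisk one (92b5af33...) never logs VM_MIGRATION_DONE, matching the behaviour described above.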
--
Cordialement,
*Frank Soyer *
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi Milan,
I tried to activate the debug mode, but the restart of libvirt crashed something on the host: it was no longer possible to start any VM on it, and migrations to it just never started. So I decided to restart it and, to be sure, I restarted all the hosts.
And... now the migration of all VMs, single- or multi-disk, works ?!? So there was probably something hidden that was reset or repaired by the global restart!... In French, we call that "tomber en marche" ;)
So: solved. Thank you, and sorry for the wasted time!
--
Cordialement,
Frank Soyer
Mob. 06 72 28 38 53 - Fix. 05 49 50 52 34
On Monday, February 26, 2018 12:59 CET, Milan Zamazal <mzamazal@redhat.com> wrote:
"fsoyer" <fsoyer@systea.fr> writes:

> I don't believe that this is related to a host: tests have been done from victor
> as source to ginger as destination and from ginger to victor. I don't see problems on storage
> (Gluster 3.12, natively managed by oVirt), and VMs with a single disk from 20 to
> 250G migrate without error in a few seconds and with no downtime.

The host itself may be fine, but libvirt/QEMU running there may expose
problems, perhaps just for some VMs. According to your logs something is
not behaving as expected on the source host during the faulty migration.

> How can I enable this libvirt debug mode?

Set the following options in /etc/libvirt/libvirtd.conf (look for
examples in the comments there)

- log_level=1
- log_outputs="1:file:/var/log/libvirt/libvirtd.log"

and restart libvirt. Then /var/log/libvirt/libvirtd.log should contain
the log. It will be huge, so I suggest enabling it only for the time
needed to reproduce the problem.
> --
>
> Cordialement,
>
> Frank Soyer
>
> On Friday, February 23, 2018 09:56 CET, Milan Zamazal <mzamazal@redhat.com> wrote:
> Maor Lipchuk <mlipchuk@redhat.com> writes:
>
>> I encountered a bug (see [1]) which contains the same error mentioned in
>> your VDSM logs (see [2]), but I doubt it is related.
>
> Indeed, it's not related.
>
> The error in vdsm_victor.log just means that the info-gathering call
> tries to access the libvirt domain before the incoming migration is
> completed. It's ugly but harmless.
>
>> Milan, maybe you have any advice to troubleshoot the issue? Would the
>> libvirt/qemu logs help?
>
> It seems there is something wrong on (at least) the source host. There
> are no migration progress messages in vdsm_ginger.log and there are
> warnings about stale stat samples. That looks like problems with
> calling libvirt – slow and/or stuck calls, maybe due to storage
> problems. The possibly faulty second disk could cause that.
>
> libvirt debug logs could tell us whether that is indeed the problem and
> whether it is caused by storage or something else.
>
>> I would suggest opening a bug on that issue so we can track it more
>> properly.
>>
>> Regards,
>> Maor
>>
>> [1]
>> https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to
>> VM running on 2 Hosts
>>
>> [2]
>> 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]
>> Internal server error (__init__:577)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572,
>>   in _handle_request
>>     res = method(**params)
>>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in
>>   _dynamicMethod
>>     result = fn(*methodArgs)
>>   File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
>>     io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
>>   File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
>>     'current_values': v.getIoTune()}
>>   File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
>>     result = self.getIoTuneResponse()
>>   File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
>>     res = self._dom.blockIoTune(
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47,
>>   in __getattr__
>>     % self.vmid)
>> NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not
>> started yet or was shut down
>>
>> On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fsoyer@systea.fr> wrote:
>>
>>> Hi,
>>> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger
>>> (192.168.0.6) migrated (or failing to migrate...) to victor (192.168.0.5),
>>> while the engine.log in the first mail on 2018-02-12 was for VMs standing
>>> on victor, migrated (or failing to migrate...) to ginger. The symptoms were
>>> exactly the same in both directions, and the VMs worked like a charm before,
>>> and even after (the migration "killed" by a poweroff of the VMs).
>>> Am I the only one experiencing this problem?
>>> >>> >>> Thanks >>> -- >>> >>> Cordialement, >>> >>> *Frank Soyer * >>> >>> >>> >>> Le Jeudi, F=C3=A9vrier 22, 2018 00:45 CET, Maor Lipchuk <mlipchuk@r= edhat.com> >>> a =C3=A9crit: >>> >>> >>> Hi Frank, >>> >>> Sorry about the delay repond. >>> I've been going through the logs you attached, although I could not= find >>> any specific indication why the migration failed because of the dis= k you >>> were mentionning. >>> Does this VM run with both disks on the target host without migrati= on? >>> >>> Regards, >>> Maor >>> >>> >>> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr> wrote: >>>> >>>> Hi Maor, >>>> sorry for the double post, I've change the email adress of my acco= unt and >>>> supposed that I'd need to re-post it. >>>> And thank you for your time. Here are the logs. I added a vdisk to= an >>>> existing VM : it no more migrates, needing to poweroff it after mi= nutes. >>>> Then simply deleting the second disk makes migrate it in exactly 9= s without >>>> problem ! >>>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561 >>>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d >>>> >>>> -- >>>> >>>> Cordialement, >>>> >>>> *Frank Soyer * >>>> Le Mercredi, F=C3=A9vrier 14, 2018 11:04 CET, Maor Lipchuk < >>>> mlipchuk@redhat.com> a =C3=A9crit: >>>> >>>> >>>> Hi Frank, >>>> >>>> I already replied on your last email. >>>> Can you provide the VDSM logs from the time of the migration failu= re for >>>> both hosts: >>>> ginger.local.systea.f <http://ginger.local.systea.fr/>r and v >>>> ictor.local.systea.fr >>>> >>>> Thanks, >>>> Maor >>>> >>>> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:= >>>>> >>>>> Hi all, >>>>> I discovered yesterday a problem when migrating VM with more than= one >>>>> vdisk. >>>>> On our test servers (oVirt4.1, shared storage with Gluster), I cr= eated 2 >>>>> VMs needed for a test, from a template with a 20G vdisk. 
>>>>> On these VMs I added a 100G vdisk (for these tests I didn't want to waste
>>>>> time extending the existing vdisks... but I lost time in the end...). The VMs
>>>>> with the 2 vdisks work well.
>>>>> Now I saw some updates waiting on the host. I tried to put it into
>>>>> maintenance... but it got stuck on the two VMs. They were marked "migrating",
>>>>> but no longer accessible. Other (small) VMs with only 1 vdisk were migrated
>>>>> without problem at the same time.
>>>>> I saw that a kvm process for the (big) VMs was launched on the source
>>>>> AND the destination host, but after tens of minutes the migration and the VMs
>>>>> were still frozen. I tried to cancel the migration for the VMs: failed.
>>>>> The only way to stop it was to power off the VMs: the kvm process died on
>>>>> the 2 hosts and the GUI alerted on a failed migration.
>>>>> In doubt, I tried to delete the second vdisk on one of these VMs: it
>>>>> then migrates without error! And no access problem.
>>>>> I tried to extend the first vdisk of the second VM, then delete the
>>>>> second vdisk: it now migrates without problem!
>>>>>
>>>>> So after another test with a VM with 2 vdisks, I can say that this
>>>>> blocked the migration process :(
>>>>>
>>>>> In engine.log, for a VM with 1 vdisk migrating well, we see:
>>>>>
>>>>> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
>>>>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false.
>>>>> Entities affected: ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER
>>>>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
>>>>> 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381
>>>>> 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381
>>>>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
>>>>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz).
>>>>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
>>>>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}',
device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=,
>>>>> statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
>>>>> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
>>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
>>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
>>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to
>>>>> VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done
>>>>> ....
>>>>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57
>>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57
>>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down'
>>>>> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'.
>>>>> Setting VM to status 'MigratingTo'
>>>>> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up'
>>>>> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
>>>>> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281
>>>>> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))
>>>>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
>>>>> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true',
hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
>>>>> 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298
>>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc,
specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
>>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0,
status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c
>>>>> 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
>>>>> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>>
>>>>> For the VM with 2 vdisks we see:
>>>>>
>>>>> 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}'
>>>>> 2018-02-12 16:49:06,407+01
INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected: ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER
>>>>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0
>>>>> 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c
>>>>> 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c
>>>>> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
>>>>> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination:
>>>>> victor.local.systea.fr, User: admin@internal-authz).
>>>>> ...
>>>>> 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
>>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
>>>>> ...
>>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
>>>>>
>>>>> and so on, the last lines repeating indefinitely for hours until we powered off
>>>>> the VM...
>>>>> Is this something known? Any idea about it?
>>>>>
>>>>> Thanks
>>>>>
>>>>> oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
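The failing pattern in the engine.log excerpt above can be spotted mechanically. A minimal sketch follows; it greps a small inline sample instead of the real engine log (/var/log/ovirt-engine/engine.log is the usual default path on the engine host, an assumption about the setup), and the sample lines are abbreviated copies of the messages shown above:

```shell
# Build a small sample standing in for engine.log (abbreviated from the thread above).
cat > engine-sample.log <<'EOF'
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] EVENT_ID: VM_MIGRATION_DONE(63), Message: Migration completed (VM: Oracle_SECONDARY)
EOF
# A healthy migration ends with a VM_MIGRATION_DONE event; a stuck one only
# accumulates "unexpectedly detected as 'MigratingTo'" lines for the same VM id.
stuck=$(grep -c "unexpectedly detected as 'MigratingTo'" engine-sample.log)
completed=$(grep -c "VM_MIGRATION_DONE" engine-sample.log)
echo "stuck-pattern lines: $stuck, completed migrations: $completed"
```

On this sample it prints `stuck-pattern lines: 2, completed migrations: 1`; on a live log, a growing stuck count for one VM id with no matching VM_MIGRATION_DONE is the signature described above.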
>>>>>
>>>>> --
>>>>>
>>>>> Regards,
>>>>>
>>>>> Frank Soyer
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users

Hi Milan,
I tried to activate the debug mode, but the restart of libvirt crashed something on the host: it was no longer possible to start any VM on it, and migration to it just never started. So I decided to restart it and, to be sure, I restarted all the hosts.
And... now the migration of all VMs, simple or multi-disk, works?!? So there was probably something hidden that was reset or repaired by the global restart!... In French, we call that "tomber en marche" ;)

So: solved. Thank you for the time spent!

--

Regards,

Frank Soyer
Mob. 06 72 28 38 53 - Fix. 05 49 50 52 34

On Monday, February 26, 2018 12:59 CET, Milan Zamazal <mzamazal@redhat.com> wrote:

"fsoyer" <fsoyer@systea.fr> writes:

> I don't believe that this is related to a host, tests have been done from victor
> source to ginger dest and ginger to victor.
> I don't see problems on storage
> (gluster 3.12 native managed by ovirt), when VMs with a single disk from 20 to
> 250G migrate without error in some seconds and with no downtime.

The host itself may be fine, but libvirt/QEMU running there may expose
problems, perhaps just for some VMs. According to your logs something
is not behaving as expected on the source host during the faulty
migration.

> How can I enable this libvirt debug mode?

Set the following options in /etc/libvirt/libvirtd.conf (look for
examples in comments there):

- log_level=1
- log_outputs="1:file:/var/log/libvirt/libvirtd.log"

and restart libvirt. Then /var/log/libvirt/libvirtd.log should contain
the log. It will be huge, so I suggest enabling it only for the time
needed to reproduce the problem.

> --
>
> Regards,
>
> Frank Soyer
>
> On Friday, February 23, 2018 09:56 CET, Milan Zamazal <mzamazal@redhat.com> wrote:
> Maor Lipchuk <mlipchuk@redhat.com> writes:
>
>> I encountered a bug (see [1]) which contains the same error mentioned in
>> your VDSM logs (see [2]), but I doubt it is related.
>
> Indeed, it's not related.
>
> The error in vdsm_victor.log just means that the info gathering call
> tries to access the libvirt domain before the incoming migration is
> completed. It's ugly but harmless.
>
>> Milan, maybe you have any advice to troubleshoot the issue? Will the
>> libvirt/qemu logs help?
>
> It seems there is something wrong on (at least) the source host. There
> are no migration progress messages in vdsm_ginger.log and there are
> warnings about stale stat samples.
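[Condensed as a shell sketch, the debug-logging procedure Milan describes above could look as follows. Only the two option names and the log path come from the thread; the backup step and the `libvirtd` systemd unit name are assumptions for a typical EL7 oVirt host.]

```shell
# Back up the config first (assumption: we want an easy revert).
cp /etc/libvirt/libvirtd.conf /etc/libvirt/libvirtd.conf.bak

# Options named in the thread; appended here for brevity.
cat >> /etc/libvirt/libvirtd.conf <<'EOF'
log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"
EOF

systemctl restart libvirtd

# ...reproduce the failing migration, then collect
# /var/log/libvirt/libvirtd.log...

# Revert promptly: level-1 logging is huge and can fill the disk.
mv /etc/libvirt/libvirtd.conf.bak /etc/libvirt/libvirtd.conf
systemctl restart libvirtd
```

As the thread itself shows, restarting libvirtd on a host running VMs can have side effects, so this is best done on a host already in maintenance.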
> That looks like problems with
> calling libvirt – slow and/or stuck calls, maybe due to storage
> problems. The possibly faulty second disk could cause that.
>
> libvirt debug logs could tell us whether that is indeed the problem and
> whether it is caused by storage or something else.
>
>> I would suggest opening a bug on that issue so we can track it more
>> properly.
>>
>> Regards,
>> Maor
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to VM running on 2 Hosts
>>
>> [2]
>> 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
>>     res = method(**params)
>>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
>>     result = fn(*methodArgs)
>>   File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
>>     io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
>>   File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
>>     'current_values': v.getIoTune()}
>>   File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
>>     result = self.getIoTuneResponse()
>>   File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
>>     res = self._dom.blockIoTune(
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
>>     % self.vmid)
>> NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not started yet or was shut down
>>
>> On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fsoyer@systea.fr> wrote:
>>>
>>> Hi,
>>> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger
>>> (192.168.0.6) migrated (or failing to migrate...) to victor (192.168.0.5),
>>> while the engine.log in the first mail, on 2018-02-12, was for VMs standing
>>> on victor, migrated (or failing to migrate...) to ginger. Symptoms were
>>> exactly the same in both directions, and the VMs work like a charm before,
>>> and even after (migration "killed" by a poweroff of the VMs).
>>> Am I the only one experiencing this problem?
>>>
>>> Thanks
>>> --
>>> Regards,
>>> Frank Soyer
>>>
>>> On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
>>>
>>> Hi Frank,
>>>
>>> Sorry about the delayed response.
>>> I've been going through the logs you attached, although I could not find
>>> any specific indication of why the migration failed because of the disk you
>>> were mentioning.
>>> Does this VM run with both disks on the target host without migration?
>>>
>>> Regards,
>>> Maor
>>>
>>> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr> wrote:
>>>>
>>>> Hi Maor,
>>>> Sorry for the double post, I changed the email address of my account and
>>>> supposed that I'd need to re-post it.
>>>> And thank you for your time. Here are the logs.
>>>> I added a vdisk to an
>>>> existing VM: it no longer migrates, and I needed to power it off after minutes.
>>>> Then simply deleting the second disk makes it migrate in exactly 9s without
>>>> problem!
>>>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
>>>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
>>>>
>>>> --
>>>> Regards,
>>>> Frank Soyer
>>>> On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
>>>>
>>>> Hi Frank,
>>>>
>>>> I already replied to your last email.
>>>> Can you provide the VDSM logs from the time of the migration failure for
>>>> both hosts: ginger.local.systea.fr and victor.local.systea.fr
>>>>
>>>> Thanks,
>>>> Maor
>>>>
>>>> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:
>>>>>
>>>>> Hi all,
>>>>> I discovered yesterday a problem when migrating VMs with more than one
>>>>> vdisk.
>>>>> On our test servers (oVirt 4.1, shared storage with Gluster), I created 2
>>>>> VMs needed for a test, from a template with a 20G vdisk. On these VMs I
>>>>> added a 100G vdisk (for these tests I didn't want to waste time extending
>>>>> the existing vdisks... but I lost time in the end...). The VMs with the 2
>>>>> vdisks work well.
>>>>> Now I saw some updates waiting on the host. I tried to put it in
>>>>> maintenance... but it stopped on the two VMs. They were marked "migrating",
>>>>> but no longer accessible.
>>>>> Other (small) VMs with only 1 vdisk were migrated
>>>>> without problem at the same time.
>>>>> I saw that a kvm process for the (big) VMs was launched on the source
>>>>> AND destination host, but after tens of minutes, the migration and the VMs
>>>>> were still frozen. I tried to cancel the migration for the VMs: failed.
>>>>> The only way to stop it was to power off the VMs: the kvm process died on
>>>>> the 2 hosts and the GUI alerted on a failed migration.
>>>>> In doubt, I tried to delete the second vdisk on one of these VMs: it
>>>>> then migrates without error! And no access problem.
>>>>> I tried to extend the first vdisk of the second VM, then delete the
>>>>> second vdisk: it now migrates without problem!
>>>>>
>>>>> So after another test with a VM with 2 vdisks, I can say that this
>>>>> blocked the migration process :(
>>>>>
>>>>> In engine.log, for a VM with 1 vdisk migrating well, we see:
>>>>>
>>>>> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
>>>>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false.
>>>>> Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER
>>>>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
>>>>> 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381
>>>>> 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381
>>>>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
>>>>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack:
>>>>> null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz).
>>>>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
>>>>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null',
>>>>> logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1',
>>>>> customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
>>>>> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
>>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
>>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456,
>>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
>>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done
>>>>> ....
>>>>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57
>>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) []
>>>>> FINISH, DestroyVDSCommand, log id: 560eca57
>>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down'
>>>>> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status 'MigratingTo'
>>>>> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up'
>>>>> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
>>>>> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281
>>>>> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1,
>>>>> Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))
>>>>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
>>>>> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
>>>>> 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false',
>>>>> deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298
>>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
>>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:
>>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
>>>>> readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=,
statusTime=3D= 4304263620<br />>>>>> <(430)%20426-3620>, display=3D= vnc}], log id: 58cdef4c<br />>>>>> 2018-02-12 16:46:46,2= 67+01 INFO [org.ovirt.engine.core.vdsbro<br />>>>>> ker.= monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5)<br />>>= >>> [7fcb200a] Received a vnc Device without an address when p= rocessing VM<br />>>>>> 3f57e669-5e4c-4d10-85cc-d573004a= 099d devices, skipping device:<br />>>>>> {device=3Dvnc,= specParams=3D{displayNetwork=3Dovirtmgmt, keyMap=3Dfr,<br />>>&g= t;>> displayIp=3D192.168.0.5}, type=3Dgraphics, deviceId=3D813957= b1-446a-4e88-9e40-9fe76d2c442d,<br />>>>>> port=3D5901}<= br />>>>>> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.en= gine.core.vdsbro<br />>>>>> ker.monitoring.VmDevicesMoni= toring] (DefaultQuartzScheduler5)<br />>>>>> [7fcb200a] = Received a lease Device without an address when processing VM<br />>= >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping= device:<br />>>>>> {lease=5Fid=3D3f57e669-5e4c-4d10-85c= c-d573004a099d,<br />>>>>> sd=5Fid=3D1e51cecc-eb2e-47d0-= b185-920fdc7afa16,<br />>>>>> deviceId=3D{uuid=3Da09949a= a-5642-4b6d-94a4-8b0d04257be5}, offset=3D6291456,<br />>>>>= > device=3Dlease, path=3D/rhev/data-center/mnt/glusterSD/192.168.0.6= :<br />>>>>> =5FDATA01/1e51cecc-eb2e-47d0-b185-920fdc7af= a16/dom=5Fmd/xleases, type=3Dlease}<br />>>>>><br />>= >>>><br />>>>>><br />>>>>><br= />>>>>> For the VM with 2 vdisks we see :<br />>>= >>><br />>>>>> 2018-02-12 16:49:06,112+01 INFO = [org.ovirt.engine.core.bll.MigrateVmToServerCommand]<br />>>>&= gt;> (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock A= cquired<br />>>>>> to object 'EngineLock:{exclusiveLocks= =3D'[f7d4ec12-627a-4b83-b59e-886400d55474=3DVM]',<br />>>>>= > sharedLocks=3D''}'<br />>>>>> 2018-02-12 16:49:06,4= 07+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]<br />&g= t;>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4= 142-b8fe-8b838dd7458e]<br />>>>>> Running command: Migra= teVmToServerCommand 
internal: false. Entities<br />>>>>>= affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction<br = />>>>>> group MIGRATE=5FVM with role type USER<br />>= >>>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core= .vdsbroker.MigrateVDSCommand]<br />>>>>> (org.ovirt.thre= ad.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]<br />>&g= t;>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{r= unAsync=3D'true',<br />>>>>> hostId=3D'd569c2dd-8f30-487= 8-8aea-858db285cf69',<br />>>>>> vmId=3D'f7d4ec12-627a-4= b83-b59e-886400d55474', srcHost=3D'192.168.0.5',<br />>>>>&= gt; dstVdsId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=3D'<br /= >>>>>> 192.168.0.6:54321', migrationMethod=3D'ONLINE', t= unnelMigration=3D'false',<br />>>>>> migrationDowntime=3D= '0', autoConverge=3D'true', migrateCompressed=3D'false',<br />>>&= gt;>> consoleAddress=3D'null', maxBandwidth=3D'500', enableGuestE= vents=3D'true',<br />>>>>> maxIncomingMigrations=3D'2', = maxOutgoingMigrations=3D'2',<br />>>>>> convergenceSched= ule=3D'[init=3D[{name=3DsetDowntime, params=3D[100]}],<br />>>>= ;>> stalling=3D[{limit=3D1, action=3D{name=3DsetDowntime, params=3D= [150]}}, {limit=3D2,<br />>>>>> action=3D{name=3DsetDown= time, params=3D[200]}}, {limit=3D3,<br />>>>>> action=3D= {name=3DsetDowntime, params=3D[300]}}, {limit=3D4,<br />>>>>= ;> action=3D{name=3DsetDowntime, params=3D[400]}}, {limit=3D6,<br />= >>>>> action=3D{name=3DsetDowntime, params=3D[500]}}, {l= imit=3D-1, action=3D{name=3Dabort,<br />>>>>> params=3D[= ]}}]]'}), log id: 3702a9e0<br />>>>>> 2018-02-12 16:49:0= 6,713+01 INFO [org.ovirt.engine.core.vdsbro<br />>>>>> k= er.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-4= 9)<br />>>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] STA= RT,<br />>>>>> MigrateBrokerVDSCommand(HostName =3D ging= er.local.systea.fr,<br />>>>>> MigrateVDSCommandParamete= rs:{runAsync=3D'true',<br />>>>>> hostId=3D'd569c2dd-8f3= 0-4878-8aea-858db285cf69',<br />>>>>> 
vmId=3D'f7d4ec12-6= 27a-4b83-b59e-886400d55474', srcHost=3D'192.168.0.5',<br />>>>= >> dstVdsId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=3D'= <br />>>>>> 192.168.0.6:54321', migrationMethod=3D'ONLIN= E', tunnelMigration=3D'false',<br />>>>>> migrationDownt= ime=3D'0', autoConverge=3D'true', migrateCompressed=3D'false',<br />>= ;>>>> consoleAddress=3D'null', maxBandwidth=3D'500', enable= GuestEvents=3D'true',<br />>>>>> maxIncomingMigrations=3D= '2', maxOutgoingMigrations=3D'2',<br />>>>>> convergence= Schedule=3D'[init=3D[{name=3DsetDowntime, params=3D[100]}],<br />>&g= t;>>> stalling=3D[{limit=3D1, action=3D{name=3DsetDowntime, pa= rams=3D[150]}}, {limit=3D2,<br />>>>>> action=3D{name=3D= setDowntime, params=3D[200]}}, {limit=3D3,<br />>>>>> ac= tion=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4,<br />>>= >>> action=3D{name=3DsetDowntime, params=3D[400]}}, {limit=3D6= ,<br />>>>>> action=3D{name=3DsetDowntime, params=3D[500= ]}}, {limit=3D-1, action=3D{name=3Dabort,<br />>>>>> par= ams=3D[]}}]]'}), log id: 1840069c<br />>>>>> 2018-02-12 = 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro<br />>>>>= ;> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-t= hread-49)<br />>>>>> [92b5af33-cb87-4142-b8fe-8b838dd745= 8e] FINISH, MigrateBrokerVDSCommand,<br />>>>>> log id: = 1840069c<br />>>>>> 2018-02-12 16:49:06,732+01 INFO [org= .ovirt.engine.core.vdsbroker.MigrateVDSCommand]<br />>>>>&g= t; (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd= 7458e]<br />>>>>> FINISH, MigrateVDSCommand, return: Mig= ratingFrom, log id: 3702a9e0<br />>>>>> 2018-02-12 16:49= :06,753+01 INFO [org.ovirt.engine.core.dal.db<br />>>>>>= broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thr= ead-49)<br />>>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e= ] EVENT=5FID:<br />>>>>> VM=5FMIGRATION=5FSTART(62), Cor= relation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e,<br />>>>>= ;> Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: 
null, C= ustom<br />>>>>> ID: null, Custom Event ID: -1, Message:= Migration started (VM:<br />>>>>> Oracle=5FPRIMARY, Sou= rce: ginger.local.systea.fr, Destination:<br />>>>>> vic= tor.local.systea.fr, User: admin@internal-authz).<br />>>>>= > ...<br />>>>>> 2018-02-12 16:49:16,453+01 INFO [org= .ovirt.engine.core.vdsbro<br />>>>>> ker.monitoring.VmsS= tatisticsFetcher] (DefaultQuartzScheduler4)<br />>>>>> [= 162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'= <br />>>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.e= ngine.core.vdsbroker.monitoring.VmAnalyzer]<br />>>>>> (= DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d= 55474'(Oracle=5FPRIMARY)<br />>>>>> was unexpectedly det= ected as 'MigratingTo' on VDS<br />>>>>> 'ce3938b1-b23f-= 4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)<br />>>>>&g= t; (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')<br />>>&g= t;>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbr= oker.monitoring.VmAnalyzer]<br />>>>>> (DefaultQuartzSch= eduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'<br />>= >>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87= bb1'(<br />>>>>> victor.local.systea.fr) ignoring it in = the refresh until migration is<br />>>>>> done<br />>= >>>> ...<br />>>>>> 2018-02-12 16:49:31,484+= 01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]<br />&g= t;>>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-62= 7a-4b83-b59e-886400d55474'(Oracle=5FPRIMARY)<br />>>>>> = was unexpectedly detected as 'MigratingTo' on VDS<br />>>>>= > 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)<br = />>>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf= 69')<br />>>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovi= rt.engine.core.vdsbroker.monitoring.VmAnalyzer]<br />>>>>&g= t; (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886= 400d55474'<br />>>>>> is migrating to VDS 'ce3938b1-b23f= -4d22-840a-f17d7cd87bb1'(<br 
/>>>>>> victor.local.systea= .fr) ignoring it in the refresh until migration is<br />>>>>= ;> done<br />>>>>><br />>>>>><br />>= ;>>>><br />>>>>> and so on, last lines repea= ted indefinitly for hours since we poweroff<br />>>>>> t= he VM...<br />>>>>> Is this something known ? Any idea a= bout that ?<br />>>>>><br />>>>>> Thanks<= br />>>>>><br />>>>>> Ovirt 4.1.6, update= d last at feb-13. Gluster 3.12.1.<br />>>>>><br />>&g= t;>>> --<br />>>>>><br />>>>>> C= ordialement,<br />>>>>><br />>>>>> *Frank= Soyer *<br />>>>>><br />>>>>> =5F=5F=5F=5F= =5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F= =5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F<br />>>= >>> Users mailing list<br />>>>>> Users@ovirt.o= rg<br />>>>>> http://lists.ovirt.org/mailman/listinfo/us= ers<br /> </blockquote><br /> </html> ------=_=-_OpenGroupware_org_NGMime-18019-1519921630.582362-42--------
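The tell-tale symptom in the log above is the VmAnalyzer "was unexpectedly detected as 'MigratingTo'" message repeating every ~15 seconds for the same VM. A quick way to spot a wedged migration is to count those repeats per VM id in engine.log. A minimal sketch, assuming the message format shown in the excerpt above (the helper name and sample are mine, not part of oVirt):

```python
import re
from collections import Counter

# Message format copied from the engine.log excerpt above; the quoted VM id
# is followed by an optional '(VmName)' token, then the fixed phrase.
STUCK_RE = re.compile(
    r"VM '(?P<vm>[0-9a-f-]+)'\S*\s+was unexpectedly detected as 'MigratingTo'"
)

def count_stuck_reports(log_text: str) -> Counter:
    """Count, per VM id, how often it was reported as unexpectedly 'MigratingTo'."""
    return Counter(m.group("vm") for m in STUCK_RE.finditer(log_text))

# Two sample lines taken from the log excerpt above.
sample = """\
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
"""
print(count_stuck_reports(sample))
```

In practice you would feed it the contents of /var/log/ovirt-engine/engine.log; a count that keeps growing for one VM while others come and go is the stuck-migration pattern described here.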

"fsoyer" <fsoyer@systea.fr> writes:
I tried to activate the debug mode, but restarting libvirt broke something on the host: it was no longer possible to start any VM on it, and migrations to it never started. So I decided to reboot it and, to be sure, I restarted all the hosts. And... now the migration of all VMs, single- or multi-disk, works?!? So there was probably something hidden that was reset or repaired by the global restart!... In French, we call that "tomber en marche" (roughly, "failing into working") ;)
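For anyone retrying the debug step: libvirt debug logging is normally enabled by adding two settings to /etc/libvirt/libvirtd.conf and restarting libvirtd. A hedged sketch that only edits config text, so it can be tried on a copy of the file first (the filter values follow libvirt's logging documentation; the helper name is mine):

```python
# Settings commonly recommended by libvirt's logging documentation for
# debugging QEMU/migration issues; adjust filters to taste.
DEBUG_SETTINGS = {
    "log_filters": '"1:qemu 1:libvirt 3:object 3:json 3:event"',
    "log_outputs": '"1:file:/var/log/libvirt/libvirtd.log"',
}

def enable_debug(conf_text: str) -> str:
    """Append any missing debug settings to libvirtd.conf-style text (idempotent)."""
    lines = conf_text.rstrip("\n").splitlines()
    # Keys already set in the file, so we never add a duplicate assignment.
    present = {line.split("=", 1)[0].strip() for line in lines if "=" in line}
    for key, value in DEBUG_SETTINGS.items():
        if key not in present:
            lines.append(f"{key} = {value}")
    return "\n".join(lines) + "\n"

print(enable_debug("# managed by ovirt\n"))
```

After writing the result back to /etc/libvirt/libvirtd.conf you would restart libvirtd (systemctl restart libvirtd), which, as the report above shows, can itself disrupt running VMs, so best done on a drained host.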
I'm always amazed by how many problems in computing are eventually resolved (and how many new ones are introduced) by a reboot :-). I'm glad that it works for you now. Regards, Milan
participants (3)
- fsoyer
- Maor Lipchuk
- Milan Zamazal