<div dir="ltr">I encountered a bug (see [1]) which contains the same error mentioned in your VDSM logs (see [2]), but I doubt it is related.<div>Milan, do you have any advice on how to troubleshoot the issue? Would the libvirt/qemu logs help?</div><div>I would suggest opening a bug for this issue so we can track it properly.</div><div><br></div><div>Regards,</div><div>Maor</div><div><div><div><br></div><div><br></div><div>[1]</div><div><a href="https://bugzilla.redhat.com/show_bug.cgi?id=1486543">https://bugzilla.redhat.com/show_bug.cgi?id=1486543</a> - Migration leads to VM running on 2 Hosts<br></div><div><br></div><div>[2]</div><div>2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)</div><div>Traceback (most recent call last):</div><div>  File &quot;/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py&quot;, line 572, in _handle_request</div><div>    res = method(**params)</div><div>  File &quot;/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py&quot;, line 198, in _dynamicMethod</div><div>    result = fn(*methodArgs)</div><div>  File &quot;/usr/share/vdsm/API.py&quot;, line 1454, in getAllVmIoTunePolicies</div><div>    io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()</div><div>  File &quot;/usr/share/vdsm/clientIF.py&quot;, line 454, in getAllVmIoTunePolicies</div><div>    &#39;current_values&#39;: v.getIoTune()}</div><div>  File &quot;/usr/share/vdsm/virt/vm.py&quot;, line 2859, in getIoTune</div><div>    result = self.getIoTuneResponse()</div><div>  File &quot;/usr/share/vdsm/virt/vm.py&quot;, line 2878, in getIoTuneResponse</div><div>    res = self._dom.blockIoTune(</div><div>  File &quot;/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py&quot;, line 47, in __getattr__</div><div>    % self.vmid)</div><div>NotConnectedError: VM u&#39;755cf168-de65-42ed-b22f-efe9136f7594&#39; was not started yet or was shut down</div></div></div></div><div class="gmail_extra"><br><div 
class="gmail_quote">On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <span dir="ltr">&lt;<a href="mailto:fsoyer@systea.fr" target="_blank">fsoyer@systea.fr</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>Yes, on 2018-02-16 (vdsm logs) I tried with a VM running on ginger (192.168.0.6), migrated (or failing to migrate...) to victor (192.168.0.5), while the engine.log in the first mail, from 2018-02-12, was for VMs running on victor, migrated (or failing to migrate...) to ginger. The symptoms were exactly the same in both directions, and the VMs worked like a charm before, and even after (once the migration was &quot;killed&quot; by powering off the VMs).<br>Am I the only one experiencing this problem?<br><br><br>Thanks<br>--<br><p class="m_-5009927997311193272Text1">Regards,<br><br><b>Frank Soyer </b><br> </p><div class="HOEnZb"><div class="h5"><br><br>On Thursday, February 22, 2018, 00:45 CET, Maor Lipchuk &lt;<a href="mailto:mlipchuk@redhat.com" target="_blank">mlipchuk@redhat.com</a>&gt; wrote:<br> <blockquote type="cite" cite="http://CAJ1JNOe9Yi5XnFWvqOYhpoMuhkXOKAR=NOWafRkRHLXuOTtwtg@mail.gmail.com"><div dir="ltr">Hi Frank,<div> </div><div>Sorry for the delayed response.</div><div>I&#39;ve been going through the logs you attached, but I could not find any specific indication of why the migration failed because of the disk you mentioned.</div><div>Does this VM run with both disks on the target host without migration?</div><div> </div><div>Regards,</div><div>Maor</div><div> </div></div><div class="gmail_extra"> <div class="gmail_quote">On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <span dir="ltr">&lt;<a href="mailto:fsoyer@systea.fr" target="_blank">fsoyer@systea.fr</a>&gt;</span> wrote:<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Maor,<br>sorry for the double post; I&#39;ve changed the email address of my account and assumed that I&#39;d need to 
re-post it.<br>And thank you for your time. Here are the logs. I added a vdisk to an existing VM: it no longer migrates, and I have to power it off after several minutes. Then simply deleting the second disk lets it migrate in exactly 9s without any problem! <br><a href="https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561" target="_blank">https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561</a><br><a href="https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d" target="_blank">https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d</a><br><br>--<p class="m_-5009927997311193272m_-4299273321983674487Text1">Regards,<br><br><b>Frank Soyer </b></p><div class="m_-5009927997311193272HOEnZb"><div class="m_-5009927997311193272h5">On Wednesday, February 14, 2018, 11:04 CET, Maor Lipchuk &lt;<a href="mailto:mlipchuk@redhat.com" target="_blank">mlipchuk@redhat.com</a>&gt; wrote:<br> <blockquote type="cite" cite="http://CAJ1JNOcD3ZX6hYG4TJ0-_umSgw6-wtJoC__HRbarc1io-Y-6Jw@mail.gmail.com"><div dir="ltr">Hi Frank,<div> </div><div>I already replied to your last email.</div><div><div>Can you provide the VDSM logs from the time of the migration failure for both hosts:</div><div> <span style="font-family:monospace"> </span><a style="font-family:monospace" href="http://ginger.local.systea.fr/" target="_blank">ginger.local.systea.fr</a> and <a style="font-family:monospace" href="http://victor.local.systea.fr/" target="_blank">victor.local.systea.fr</a></div><div> </div><div>Thanks,</div><div>Maor</div></div></div><div class="gmail_extra"> <div class="gmail_quote">On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <span dir="ltr">&lt;<a href="mailto:fsoyer@systea.fr" target="_blank">fsoyer@systea.fr</a>&gt;</span> wrote:<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p>Hi all,<br>Yesterday I discovered a problem when migrating VMs with more than one vdisk.<br>On our test servers (oVirt 4.1, shared 
storage with Gluster), I created the 2 VMs needed for a test from a template with a 20G vdisk. On these VMs I added a 100G vdisk (for these tests I didn&#39;t want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks work well.<br>Then I saw some updates waiting on the host. I tried to put it into maintenance... but it got stuck on the two VMs. They were marked &quot;migrating&quot; but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated at the same time without problem.<br>I saw that a kvm process for the (big) VMs was launched on the source AND the destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for these VMs: it failed. The only way to stop it was to power off the VMs: the kvm process then died on both hosts and the GUI reported a failed migration.<br>Just in case, I deleted the second vdisk on one of these VMs: it then migrated without error! And with no access problem.<br>I extended the first vdisk of the second VM, then deleted the second vdisk: it now migrates without problem!<br><br>So after another test with a VM with 2 vdisks, I can say that this is what blocks the migration process :(<br><br>In engine.log, for a VM with 1 vdisk that migrates fine, we see:</p><blockquote>2018-02-12 16:46:29,705+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object &#39;EngineLock:{exclusiveLocks=&#39;[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]&#39;, sharedLocks=&#39;&#39;}&#39;<br>2018-02-12 16:46:29,955+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. 
Entities affected :  ID: 3f57e669-5e4c-4d10-85cc-d57300<wbr>4a099d Type: VMAction group MIGRATE_VM with role type USER<br>2018-02-12 16:46:30,261+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.MigrateVDSCommand] (org.ovirt.thread.pool-6-threa<wbr>d-32) [2f712024-5982-46a8-82c8-fd829<wbr>3da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{r<wbr>unAsync=&#39;true&#39;, hostId=&#39;ce3938b1-b23f-4d22-840<wbr>a-f17d7cd87bb1&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;, srcHost=&#39;192.168.0.6&#39;, dstVdsId=&#39;d569c2dd-8f30-4878-8<wbr>aea-858db285cf69&#39;, dstHost=&#39;<a href="http://192.168.0.5:54321" target="_blank">192.168.0.5:54321</a>&#39;, migrationMethod=&#39;ONLINE&#39;, tunnelMigration=&#39;false&#39;, migrationDowntime=&#39;0&#39;, autoConverge=&#39;true&#39;, migrateCompressed=&#39;false&#39;, consoleAddress=&#39;null&#39;, maxBandwidth=&#39;500&#39;, enableGuestEvents=&#39;true&#39;, maxIncomingMigrations=&#39;2&#39;, maxOutgoingMigrations=&#39;2&#39;, convergenceSchedule=&#39;[init=[{n<wbr>ame=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]&#39;}), log id: 14f61ee0<br>2018-02-12 16:46:30,262+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.MigrateBrokerVDS<wbr>Command] (org.ovirt.thread.pool-6-threa<wbr>d-32) [2f712024-5982-46a8-82c8-fd829<wbr>3da5725] START, MigrateBrokerVDSCommand(HostNa<wbr>me = <a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>, MigrateVDSCommandParameters:{r<wbr>unAsync=&#39;true&#39;, hostId=&#39;ce3938b1-b23f-4d22-840<wbr>a-f17d7cd87bb1&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;, srcHost=&#39;192.168.0.6&#39;, 
dstVdsId=&#39;d569c2dd-8f30-4878-8<wbr>aea-858db285cf69&#39;, dstHost=&#39;<a href="http://192.168.0.5:54321" target="_blank">192.168.0.5:54321</a>&#39;, migrationMethod=&#39;ONLINE&#39;, tunnelMigration=&#39;false&#39;, migrationDowntime=&#39;0&#39;, autoConverge=&#39;true&#39;, migrateCompressed=&#39;false&#39;, consoleAddress=&#39;null&#39;, maxBandwidth=&#39;500&#39;, enableGuestEvents=&#39;true&#39;, maxIncomingMigrations=&#39;2&#39;, maxOutgoingMigrations=&#39;2&#39;, convergenceSchedule=&#39;[init=[{n<wbr>ame=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]&#39;}), log id: 775cd381<br>2018-02-12 16:46:30,277+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.MigrateBrokerVDS<wbr>Command] (org.ovirt.thread.pool-6-threa<wbr>d-32) [2f712024-5982-46a8-82c8-fd829<wbr>3da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381<br>2018-02-12 16:46:30,285+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.MigrateVDSCommand] (org.ovirt.thread.pool-6-threa<wbr>d-32) [2f712024-5982-46a8-82c8-fd829<wbr>3da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0<br>2018-02-12 16:46:30,301+01 INFO  [org.ovirt.engine.core.dal.db<wbr>broker.auditloghandling.AuditL<wbr>ogDirector] (org.ovirt.thread.pool-6-threa<wbr>d-32) [2f712024-5982-46a8-82c8-fd829<wbr>3da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293<wbr>da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e85<wbr>7a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: <a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>, Destination: <a href="http://ginger.local.systea.fr" 
target="_blank">ginger.local.systea.fr</a>, User: admin@internal-authz).<br>2018-02-12 16:46:31,106+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.FullListVDSComma<wbr>nd] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = <a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>, FullListVDSCommandParameters:{<wbr>runAsync=&#39;true&#39;, hostId=&#39;ce3938b1-b23f-4d22-840<wbr>a-f17d7cd87bb1&#39;, vmIds=&#39;[3f57e669-5e4c-4d10-85c<wbr>c-d573004a099d]&#39;}), log id: 54b4b435<br>2018-02-12 16:46:31,147+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.FullListVDSComma<wbr>nd] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel<wbr>7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_H<wbr>ARDDISK_d890fa68-fba4-4f49-9={<wbr>name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/de<wbr>v/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Obj<wbr>ect;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-4<wbr>9c6-a286-180e021cb274device_87<wbr>9c93ab-4df1-435c-af02-565039fc<wbr>c254=VmDevice:{id=&#39;VmDeviceId:<wbr>{deviceId=&#39;879c93ab-4df1-435c-<wbr>af02-565039fcc254&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}&#39;, device=&#39;unix&#39;, type=&#39;CHANNEL&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{bus=0, controller=0, type=virtio-serial, port=1}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, readOnly=&#39;false&#39;, deviceAlias=&#39;channel0&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, hostDevice=&#39;null&#39;}, 
device_fbddd528-7d93-49c6-a286<wbr>-180e021cb274device_879c93ab-4<wbr>df1-435c-af02-565039fcc254devi<wbr>ce_8945f61a-abbe-4156-8485-a4a<wbr>a6f1908dbdevice_017b5e59-01c4-<wbr>4aac-bf0c-b5d9557284d6=VmDevic<wbr>e:{id=&#39;VmDeviceId:{deviceId=&#39;0<wbr>17b5e59-01c4-4aac-bf0c-b5d9557<wbr>284d6&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}&#39;, device=&#39;tablet&#39;, type=&#39;UNKNOWN&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{bus=0, type=usb, port=1}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, readOnly=&#39;false&#39;, deviceAlias=&#39;input0&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, hostDevice=&#39;null&#39;}, device_fbddd528-7d93-49c6-a286<wbr>-180e021cb274=VmDevice:{id=&#39;Vm<wbr>DeviceId:{deviceId=&#39;fbddd528-<wbr>7d93-49c6-a286-180e021cb274&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}&#39;, device=&#39;ide&#39;, type=&#39;CONTROLLER&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, readOnly=&#39;false&#39;, deviceAlias=&#39;ide&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, hostDevice=&#39;null&#39;}, device_fbddd528-7d93-49c6-a286<wbr>-180e021cb274device_879c93ab-4<wbr>df1-435c-af02-565039fcc254devi<wbr>ce_8945f61a-abbe-4156-8485-a4a<wbr>a6f1908db=VmDevice:{id=&#39;VmDevi<wbr>ceId:{deviceId=&#39;8945f61a-abbe-<wbr>4156-8485-a4aa6f1908db&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}&#39;, device=&#39;unix&#39;, type=&#39;CHANNEL&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{bus=0, controller=0, type=virtio-serial, port=2}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, readOnly=&#39;false&#39;, deviceAlias=&#39;channel1&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, 
hostDevice=&#39;null&#39;}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d<wbr>573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28<wbr>ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435<br>2018-02-12 16:46:31,150+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmsStatisticsFe<wbr>tcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS &#39;d569c2dd-8f30-4878-8aea-858db<wbr>285cf69&#39;<br>2018-02-12 16:46:31,151+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmDevicesMonito<wbr>ring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d57300<wbr>4a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovi<wbr>rtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e<wbr>40-9fe76d2c442d, port=5901}<br>2018-02-12 16:46:31,151+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmDevicesMonito<wbr>ring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d57300<wbr>4a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-8<wbr>5cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-<wbr>920fdc7afa16, deviceId={uuid=a09949aa-5642-4<wbr>b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glu<wbr>sterSD/192.168.0.6:_DATA01/1e5<wbr>1cecc-eb2e-47d0-b185-920fdc7af<wbr>a16/dom_md/xleases, type=lease}<br>2018-02-12 16:46:31,152+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM &#39;3f57e669-5e4c-4d10-85cc-d5730<wbr>04a099d&#39;(Oracle_SECONDARY) was unexpectedly detected as 
&#39;MigratingTo&#39; on VDS &#39;d569c2dd-8f30-4878-8aea-858db<wbr>285cf69&#39;(<a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.f<wbr>r</a>) (expected on &#39;ce3938b1-b23f-4d22-840a-f17d7<wbr>cd87bb1&#39;)<br>2018-02-12 16:46:31,152+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM &#39;3f57e669-5e4c-4d10-85cc-d5730<wbr>04a099d&#39; is migrating to VDS &#39;d569c2dd-8f30-4878-8aea-858db<wbr>285cf69&#39;(<a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.f<wbr>r</a>) ignoring it in the refresh until migration is done<br>....<br>2018-02-12 16:46:41,631+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM &#39;3f57e669-5e4c-4d10-85cc-d5730<wbr>04a099d&#39; was reported as Down on VDS &#39;ce3938b1-b23f-4d22-840a-f17d7<wbr>cd87bb1&#39;(<a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.f<wbr>r</a>)<br>2018-02-12 16:46:41,632+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.DestroyVDSComman<wbr>d] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = <a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>, DestroyVmVDSCommandParameters:<wbr>{runAsync=&#39;true&#39;, hostId=&#39;ce3938b1-b23f-4d22-840<wbr>a-f17d7cd87bb1&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;, force=&#39;false&#39;, secondsToWait=&#39;0&#39;, gracefully=&#39;false&#39;, reason=&#39;&#39;, ignoreNoVm=&#39;true&#39;}), log id: 560eca57<br>2018-02-12 16:46:41,650+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.DestroyVDSComman<wbr>d] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57<br>2018-02-12 16:46:41,650+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM &#39;3f57e669-5e4c-4d10-85cc-d5730<wbr>04a099d&#39;(Oracle_SECONDARY) moved from &#39;MigratingFrom&#39; --&gt; 
&#39;Down&#39;<br>2018-02-12 16:46:41,651+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM &#39;3f57e669-5e4c-4d10-85cc-d5730<wbr>04a099d&#39;(Oracle_SECONDARY) to Host &#39;d569c2dd-8f30-4878-8aea-858db<wbr>285cf69&#39;. Setting VM to status &#39;MigratingTo&#39;<br>2018-02-12 16:46:42,163+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM &#39;3f57e669-5e4c-4d10-85cc-d5730<wbr>04a099d&#39;(Oracle_SECONDARY) moved from &#39;MigratingTo&#39; --&gt; &#39;Up&#39;<br>2018-02-12 16:46:42,169+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.MigrateStatusVDS<wbr>Command] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostNa<wbr>me = <a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.fr</a>, MigrateStatusVDSCommandParamet<wbr>ers:{runAsync=&#39;true&#39;, hostId=&#39;d569c2dd-8f30-4878-8ae<wbr>a-858db285cf69&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}), log id: 7a25c281<br>2018-02-12 16:46:42,174+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.MigrateStatusVDS<wbr>Command] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281<br>2018-02-12 16:46:42,194+01 INFO  [org.ovirt.engine.core.dal.db<wbr>broker.auditloghandling.AuditL<wbr>ogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293<wbr>da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e85<wbr>7a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: <a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>, Destination: <a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.fr</a>, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))<br>2018-02-12 16:46:42,201+01 INFO  [org.ovirt.engine.core.bll.Mi<wbr>grateVmToServerCommand] 
(ForkJoinPool-1-worker-4) [] Lock freed to object &#39;EngineLock:{exclusiveLocks=&#39;[<wbr>3f57e669-5e4c-4d10-85cc-d57300<wbr>4a099d=VM]&#39;, sharedLocks=&#39;&#39;}&#39;<br>2018-02-12 16:46:42,203+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.FullListVDSComma<wbr>nd] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = <a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.fr</a>, FullListVDSCommandParameters:{<wbr>runAsync=&#39;true&#39;, hostId=&#39;d569c2dd-8f30-4878-8ae<wbr>a-858db285cf69&#39;, vmIds=&#39;[3f57e669-5e4c-4d10-85c<wbr>c-d573004a099d]&#39;}), log id: 7cc65298<br>2018-02-12 16:46:42,254+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.FullListVDSComma<wbr>nd] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel<wbr>7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj<wbr>ect;@760085fd, custom={device_fbddd528-7d93-4<wbr>9c6-a286-180e021cb274device_87<wbr>9c93ab-4df1-435c-af02-565039fc<wbr>c254=VmDevice:{id=&#39;VmDeviceId:<wbr>{deviceId=&#39;879c93ab-4df1-435c-<wbr>af02-565039fcc254&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}&#39;, device=&#39;unix&#39;, type=&#39;CHANNEL&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{bus=0, controller=0, type=virtio-serial, port=1}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, readOnly=&#39;false&#39;, deviceAlias=&#39;channel0&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, hostDevice=&#39;null&#39;}, 
device_fbddd528-7d93-49c6-a286<wbr>-180e021cb274device_879c93ab-4<wbr>df1-435c-af02-565039fcc254devi<wbr>ce_8945f61a-abbe-4156-8485-a4a<wbr>a6f1908dbdevice_017b5e59-01c4-<wbr>4aac-bf0c-b5d9557284d6=VmDevic<wbr>e:{id=&#39;VmDeviceId:{deviceId=&#39;0<wbr>17b5e59-01c4-4aac-bf0c-b5d9557<wbr>284d6&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}&#39;, device=&#39;tablet&#39;, type=&#39;UNKNOWN&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{bus=0, type=usb, port=1}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, readOnly=&#39;false&#39;, deviceAlias=&#39;input0&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, hostDevice=&#39;null&#39;}, device_fbddd528-7d93-49c6-a286<wbr>-180e021cb274=VmDevice:{id=&#39;Vm<wbr>DeviceId:{deviceId=&#39;fbddd528-<wbr>7d93-49c6-a286-180e021cb274&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}&#39;, device=&#39;ide&#39;, type=&#39;CONTROLLER&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, readOnly=&#39;false&#39;, deviceAlias=&#39;ide&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, hostDevice=&#39;null&#39;}, device_fbddd528-7d93-49c6-a286<wbr>-180e021cb274device_879c93ab-4<wbr>df1-435c-af02-565039fcc254devi<wbr>ce_8945f61a-abbe-4156-8485-a4a<wbr>a6f1908db=VmDevice:{id=&#39;VmDevi<wbr>ceId:{deviceId=&#39;8945f61a-abbe-<wbr>4156-8485-a4aa6f1908db&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}&#39;, device=&#39;unix&#39;, type=&#39;CHANNEL&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{bus=0, controller=0, type=virtio-serial, port=2}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, readOnly=&#39;false&#39;, deviceAlias=&#39;channel1&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, 
hostDevice=&#39;null&#39;}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298<br>2018-02-12 16:46:42,257+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}<br>2018-02-12 16:46:42,257+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}<br>2018-02-12 16:46:46,260+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, 
QEMU_DVD-ROM_QM00003={name=/de<wbr>v/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj<wbr>ect;@77951faf, custom={device_fbddd528-7d93-4<wbr>9c6-a286-180e021cb274device_87<wbr>9c93ab-4df1-435c-af02-565039fc<wbr>c254=VmDevice:{id=&#39;VmDeviceId:<wbr>{deviceId=&#39;879c93ab-4df1-435c-<wbr>af02-565039fcc254&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}&#39;, device=&#39;unix&#39;, type=&#39;CHANNEL&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{bus=0, controller=0, type=virtio-serial, port=1}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, readOnly=&#39;false&#39;, deviceAlias=&#39;channel0&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, hostDevice=&#39;null&#39;}, device_fbddd528-7d93-49c6-a286<wbr>-180e021cb274device_879c93ab-4<wbr>df1-435c-af02-565039fcc254devi<wbr>ce_8945f61a-abbe-4156-8485-a4a<wbr>a6f1908dbdevice_017b5e59-01c4-<wbr>4aac-bf0c-b5d9557284d6=VmDevic<wbr>e:{id=&#39;VmDeviceId:{deviceId=&#39;0<wbr>17b5e59-01c4-4aac-bf0c-b5d9557<wbr>284d6&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}&#39;, device=&#39;tablet&#39;, type=&#39;UNKNOWN&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{bus=0, type=usb, port=1}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, readOnly=&#39;false&#39;, deviceAlias=&#39;input0&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, hostDevice=&#39;null&#39;}, device_fbddd528-7d93-49c6-a286<wbr>-180e021cb274=VmDevice:{id=&#39;Vm<wbr>DeviceId:{deviceId=&#39;fbddd528-<wbr>7d93-49c6-a286-180e021cb274&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d&#39;}&#39;, device=&#39;ide&#39;, type=&#39;CONTROLLER&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, 
readOnly=&#39;false&#39;, deviceAlias=&#39;ide&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, hostDevice=&#39;null&#39;}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id=&#39;VmDeviceId:{deviceId=&#39;8945f61a-abbe-4156-8485-a4aa6f1908db&#39;, vmId=&#39;3f57e669-5e4c-4d10-85cc-d573004a099d&#39;}&#39;, device=&#39;unix&#39;, type=&#39;CHANNEL&#39;, bootOrder=&#39;0&#39;, specParams=&#39;[]&#39;, address=&#39;{bus=0, controller=0, type=virtio-serial, port=2}&#39;, managed=&#39;false&#39;, plugged=&#39;true&#39;, readOnly=&#39;false&#39;, deviceAlias=&#39;channel1&#39;, customProperties=&#39;[]&#39;, snapshotId=&#39;null&#39;, logicalName=&#39;null&#39;, hostDevice=&#39;null&#39;}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c<br>2018-02-12 16:46:46,267+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}<br>2018-02-12 16:46:46,268+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] 
Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}<p> </p></blockquote><br>For the VM with 2 vdisks we see:<blockquote><p>2018-02-12 16:49:06,112+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object &#39;EngineLock:{exclusiveLocks=&#39;[f7d4ec12-627a-4b83-b59e-886400d55474=VM]&#39;, sharedLocks=&#39;&#39;}&#39;<br>2018-02-12 16:49:06,407+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. 
Entities affected :  ID: f7d4ec12-627a-4b83-b59e-886400<wbr>d55474 Type: VMAction group MIGRATE_VM with role type USER<br>2018-02-12 16:49:06,712+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.MigrateVDSCommand] (org.ovirt.thread.pool-6-threa<wbr>d-49) [92b5af33-cb87-4142-b8fe-8b838<wbr>dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{r<wbr>unAsync=&#39;true&#39;, hostId=&#39;d569c2dd-8f30-4878-8ae<wbr>a-858db285cf69&#39;, vmId=&#39;f7d4ec12-627a-4b83-b59e-<wbr>886400d55474&#39;, srcHost=&#39;192.168.0.5&#39;, dstVdsId=&#39;ce3938b1-b23f-4d22-8<wbr>40a-f17d7cd87bb1&#39;, dstHost=&#39;<a href="http://192.168.0.6:54321" target="_blank">192.168.0.6:54321</a>&#39;, migrationMethod=&#39;ONLINE&#39;, tunnelMigration=&#39;false&#39;, migrationDowntime=&#39;0&#39;, autoConverge=&#39;true&#39;, migrateCompressed=&#39;false&#39;, consoleAddress=&#39;null&#39;, maxBandwidth=&#39;500&#39;, enableGuestEvents=&#39;true&#39;, maxIncomingMigrations=&#39;2&#39;, maxOutgoingMigrations=&#39;2&#39;, convergenceSchedule=&#39;[init=[{n<wbr>ame=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]&#39;}), log id: 3702a9e0<br>2018-02-12 16:49:06,713+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.MigrateBrokerVDS<wbr>Command] (org.ovirt.thread.pool-6-threa<wbr>d-49) [92b5af33-cb87-4142-b8fe-8b838<wbr>dd7458e] START, MigrateBrokerVDSCommand(HostNa<wbr>me = <a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.fr</a>, MigrateVDSCommandParameters:{r<wbr>unAsync=&#39;true&#39;, hostId=&#39;d569c2dd-8f30-4878-8ae<wbr>a-858db285cf69&#39;, vmId=&#39;f7d4ec12-627a-4b83-b59e-<wbr>886400d55474&#39;, srcHost=&#39;192.168.0.5&#39;, 
dstVdsId=&#39;ce3938b1-b23f-4d22-8<wbr>40a-f17d7cd87bb1&#39;, dstHost=&#39;<a href="http://192.168.0.6:54321" target="_blank">192.168.0.6:54321</a>&#39;, migrationMethod=&#39;ONLINE&#39;, tunnelMigration=&#39;false&#39;, migrationDowntime=&#39;0&#39;, autoConverge=&#39;true&#39;, migrateCompressed=&#39;false&#39;, consoleAddress=&#39;null&#39;, maxBandwidth=&#39;500&#39;, enableGuestEvents=&#39;true&#39;, maxIncomingMigrations=&#39;2&#39;, maxOutgoingMigrations=&#39;2&#39;, convergenceSchedule=&#39;[init=[{n<wbr>ame=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]&#39;}), log id: 1840069c<br>2018-02-12 16:49:06,724+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.vdsbroker.MigrateBrokerVDS<wbr>Command] (org.ovirt.thread.pool-6-threa<wbr>d-49) [92b5af33-cb87-4142-b8fe-8b838<wbr>dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c<br>2018-02-12 16:49:06,732+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.MigrateVDSCommand] (org.ovirt.thread.pool-6-threa<wbr>d-49) [92b5af33-cb87-4142-b8fe-8b838<wbr>dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0<br>2018-02-12 16:49:06,753+01 INFO  [org.ovirt.engine.core.dal.db<wbr>broker.auditloghandling.AuditL<wbr>ogDirector] (org.ovirt.thread.pool-6-threa<wbr>d-49) [92b5af33-cb87-4142-b8fe-8b838<wbr>dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838d<wbr>d7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c<wbr>383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: <a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.fr</a>, Destination: <a href="http://victor.local.systea.fr" 
target="_blank">victor.local.systea.fr</a>, User: admin@internal-authz).<br>...<br>2018-02-12 16:49:16,453+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmsStatisticsFe<wbr>tcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS &#39;ce3938b1-b23f-4d22-840a-f17d7<wbr>cd87bb1&#39;<br>2018-02-12 16:49:16,455+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM &#39;f7d4ec12-627a-4b83-b59e-88640<wbr>0d55474&#39;(Oracle_PRIMARY) was unexpectedly detected as &#39;MigratingTo&#39; on VDS &#39;ce3938b1-b23f-4d22-840a-f17d7<wbr>cd87bb1&#39;(<a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.f<wbr>r</a>) (expected on &#39;d569c2dd-8f30-4878-8aea-858db<wbr>285cf69&#39;)<br>2018-02-12 16:49:16,455+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM &#39;f7d4ec12-627a-4b83-b59e-88640<wbr>0d55474&#39; is migrating to VDS &#39;ce3938b1-b23f-4d22-840a-f17d7<wbr>cd87bb1&#39;(<a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.f<wbr>r</a>) ignoring it in the refresh until migration is done<br>...<br>2018-02-12 16:49:31,484+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM &#39;f7d4ec12-627a-4b83-b59e-88640<wbr>0d55474&#39;(Oracle_PRIMARY) was unexpectedly detected as &#39;MigratingTo&#39; on VDS &#39;ce3938b1-b23f-4d22-840a-f17d7<wbr>cd87bb1&#39;(<a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.f<wbr>r</a>) (expected on &#39;d569c2dd-8f30-4878-8aea-858db<wbr>285cf69&#39;)<br>2018-02-12 16:49:31,484+01 INFO  [org.ovirt.engine.core.vdsbro<wbr>ker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM &#39;f7d4ec12-627a-4b83-b59e-88640<wbr>0d55474&#39; is migrating to VDS &#39;ce3938b1-b23f-4d22-840a-f17d7<wbr>cd87bb1&#39;(<a href="http://victor.local.systea.fr" 
target="_blank">victor.local.systea.f<wbr>r</a>) ignoring it in the refresh until migration is done<br> </p></blockquote><br>and so on; these last lines repeated indefinitely for hours, until we powered off the VM...<br>Is this a known issue? Any ideas about it?<br><br>Thanks<br><br>oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.<br><br>--<p class="m_-5009927997311193272m_-4299273321983674487m_8587729722327689770Text1">Cordialement,<br><br><b>Frank Soyer </b></p><br>______________________________<wbr>_________________<br>Users mailing list<br><a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br><a rel="noreferrer" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br> </blockquote></div></div></blockquote><br> 
</div></div></blockquote></div><br></div>
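For readers following the logs above: the `convergenceSchedule` parameter in the MigrateVDSCommand entries encodes how the engine escalates the allowed migration downtime each time the memory-copy phase stalls, and finally aborts the migration once the bounded steps run out. The sketch below is an illustration of one plausible reading of that policy, not oVirt source code; the data layout and the `action_for_stall` helper are hypothetical names of mine.

```python
# Illustration only (not oVirt/vdsm source): the downtime-escalation policy
# encoded in the 'convergenceSchedule' parameter seen in the log above.

# init: start the migration with a 100 ms allowed downtime.
INIT_DOWNTIME_MS = 100

# stalling: (limit, action) pairs; limit -1 is the unconditional last resort.
STALLING = [
    (1, ("setDowntime", 150)),
    (2, ("setDowntime", 200)),
    (3, ("setDowntime", 300)),
    (4, ("setDowntime", 400)),
    (6, ("setDowntime", 500)),
    (-1, ("abort", None)),
]

def action_for_stall(iterations):
    """Action after `iterations` consecutive stalling iterations: the bounded
    entry with the largest limit not exceeding the count, or the limit=-1
    catch-all (abort) once every bounded step has been exhausted."""
    bounded = [(limit, action) for limit, action in STALLING if limit != -1]
    if iterations > max(limit for limit, _ in bounded):
        return ("abort", None)          # the limit=-1 entry fires
    applicable = [action for limit, action in bounded if iterations >= limit]
    return applicable[-1] if applicable else None  # None: keep INIT_DOWNTIME_MS
```

Under this reading, a migration that never converges walks through 150 ms up to 500 ms of tolerated downtime and is then aborted, which matches the `{limit=-1, action={name=abort, params=[]}}` tail of the logged schedule.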