<div dir="ltr">Hi Frank,<div><br></div><div>I already replied to your last email.</div><div><div>Can you provide the VDSM logs from the time of the migration failure for both hosts:</div><div> <a href="http://ginger.local.systea.fr/" target="_blank" style="font-family:monospace">ginger.local.systea.fr</a> and <a href="http://victor.local.systea.fr/" target="_blank" style="font-family:monospace">victor.local.systea.fr</a></div><div><br></div><div>Thanks,</div><div>Maor</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <span dir="ltr"><<a href="mailto:fsoyer@systea.fr" target="_blank">fsoyer@systea.fr</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p>Hi all,<br>Yesterday I discovered a problem when migrating VMs with more than one vdisk.<br>On our test servers (oVirt 4.1, shared storage with Gluster), I created the 2 VMs needed for a test from a template with a 20G vdisk. To each of these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks worked fine.<br>Then I saw some updates waiting on the host and tried to put it into maintenance... but the process got stuck on the two VMs. They were marked "migrating" but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time.<br>I saw that a kvm process for the (big) VMs was running on the source AND the destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration of these VMs: it failed. The only way to stop it was to power off the VMs: the kvm processes died on both hosts and the GUI reported a failed migration.<br>Just in case, I deleted the second vdisk on one of these VMs: it then migrated without error!
And with no access problem.<br>On the second VM I extended the first vdisk, then deleted the second vdisk: it now migrates without problem!<br><br>So after another test with a VM with 2 vdisks, I can say that having two vdisks is what blocks the migration process :(<br><br>In engine.log, for a VM with 1 vdisk that migrates correctly, we see:</p><blockquote>2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'<br>2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER<br>2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='<a href="http://192.168.0.5:54321" target="_blank">192.168.0.5:54321</a>', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, 
params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0<br>2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-<wbr>thread-32) [2f712024-5982-46a8-82c8-<wbr>fd8293da5725] START, MigrateBrokerVDSCommand(<wbr>HostName = <a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>, MigrateVDSCommandParameters:{<wbr>runAsync='true', hostId='ce3938b1-b23f-4d22-<wbr>840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-<wbr>8aea-858db285cf69', dstHost='<a href="http://192.168.0.5:54321" target="_blank">192.168.0.5:54321</a>', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{<wbr>name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381<br>2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-<wbr>thread-32) [2f712024-5982-46a8-82c8-<wbr>fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381<br>2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-<wbr>thread-32) [2f712024-5982-46a8-82c8-<wbr>fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0<br>2018-02-12 16:46:30,301+01 INFO 
[org.ovirt.engine.core.dal.<wbr>dbbroker.auditloghandling.<wbr>AuditLogDirector] (org.ovirt.thread.pool-6-<wbr>thread-32) [2f712024-5982-46a8-82c8-<wbr>fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-<wbr>fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-<wbr>5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: <a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>, Destination: <a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.fr</a>, User: admin@internal-authz).<br>2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = <a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>, FullListVDSCommandParameters:{<wbr>runAsync='true', hostId='ce3938b1-b23f-4d22-<wbr>840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-<wbr>85cc-d573004a099d]'}), log id: 54b4b435<br>2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-<wbr>rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_<wbr>HARDDISK_d890fa68-fba4-4f49-9=<wbr>{name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/<wbr>dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.<wbr>Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-<wbr>49c6-a286-180e021cb274device_<wbr>879c93ab-4df1-435c-af02-<wbr>565039fcc254=VmDevice:{id='<wbr>VmDeviceId:{deviceId='<wbr>879c93ab-4df1-435c-af02-<wbr>565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, 
port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-<wbr>a286-180e021cb274device_<wbr>879c93ab-4df1-435c-af02-<wbr>565039fcc254device_8945f61a-<wbr>abbe-4156-8485-<wbr>a4aa6f1908dbdevice_017b5e59-<wbr>01c4-4aac-bf0c-b5d9557284d6=<wbr>VmDevice:{id='VmDeviceId:{<wbr>deviceId='017b5e59-01c4-4aac-<wbr>bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-<wbr>a286-180e021cb274=VmDevice:{<wbr>id='VmDeviceId:{deviceId='<wbr>fbddd528-7d93-49c6-a286-<wbr>180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-<wbr>a286-180e021cb274device_<wbr>879c93ab-4df1-435c-af02-<wbr>565039fcc254device_8945f61a-<wbr>abbe-4156-8485-a4aa6f1908db=<wbr>VmDevice:{id='VmDeviceId:{<wbr>deviceId='8945f61a-abbe-4156-<wbr>8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, 
vmId=3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@<wbr>28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435<br>2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.monitoring.<wbr>VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-<wbr>858db285cf69'<br>2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.monitoring.<wbr>VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=<wbr>ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-<wbr>9e40-9fe76d2c442d, port=5901}<br>2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.monitoring.<wbr>VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-<wbr>85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-<wbr>920fdc7afa16, deviceId={uuid=a09949aa-5642-<wbr>4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/<wbr>glusterSD/192.168.0.6:_DATA01/<wbr>1e51cecc-eb2e-47d0-b185-<wbr>920fdc7afa16/dom_md/xleases, type=lease}<br>2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.monitoring.<wbr>VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'(Oracle_<wbr>SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-<wbr>858db285cf69'(<a href="http://ginger.local.systea.fr" target="_blank">ginger.local.<wbr>systea.fr</a>) (expected on 
'ce3938b1-b23f-4d22-840a-<wbr>f17d7cd87bb1')<br>2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.monitoring.<wbr>VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-<wbr>858db285cf69'(<a href="http://ginger.local.systea.fr" target="_blank">ginger.local.<wbr>systea.fr</a>) ignoring it in the refresh until migration is done<br>....<br>2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.monitoring.<wbr>VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-<wbr>f17d7cd87bb1'(<a href="http://victor.local.systea.fr" target="_blank">victor.local.<wbr>systea.fr</a>)<br>2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = <a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>, DestroyVmVDSCommandParameters:<wbr>{runAsync='true', hostId='ce3938b1-b23f-4d22-<wbr>840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57<br>2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57<br>2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.monitoring.<wbr>VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'(Oracle_<wbr>SECONDARY) moved from 'MigratingFrom' --> 'Down'<br>2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.monitoring.<wbr>VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'(Oracle_<wbr>SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-<wbr>858db285cf69'. 
Setting VM to status 'MigratingTo'<br>2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.monitoring.<wbr>VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'(Oracle_<wbr>SECONDARY) moved from 'MigratingTo' --> 'Up'<br>2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(<wbr>HostName = <a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.fr</a>, MigrateStatusVDSCommandParamet<wbr>ers:{runAsync='true', hostId='d569c2dd-8f30-4878-<wbr>8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'}), log id: 7a25c281<br>2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281<br>2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.<wbr>dbbroker.auditloghandling.<wbr>AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-<wbr>fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-<wbr>5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: <a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>, Destination: <a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.fr</a>, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))<br>2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.<wbr>MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[<wbr>3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d=VM]', sharedLocks=''}'<br>2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = <a 
href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.fr</a>, FullListVDSCommandParameters:{<wbr>runAsync='true', hostId='d569c2dd-8f30-4878-<wbr>8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-<wbr>85cc-d573004a099d]'}), log id: 7cc65298<br>2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-<wbr>rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.<wbr>Object;@760085fd, custom={device_fbddd528-7d93-<wbr>49c6-a286-180e021cb274device_<wbr>879c93ab-4df1-435c-af02-<wbr>565039fcc254=VmDevice:{id='<wbr>VmDeviceId:{deviceId='<wbr>879c93ab-4df1-435c-af02-<wbr>565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-<wbr>a286-180e021cb274device_<wbr>879c93ab-4df1-435c-af02-<wbr>565039fcc254device_8945f61a-<wbr>abbe-4156-8485-<wbr>a4aa6f1908dbdevice_017b5e59-<wbr>01c4-4aac-bf0c-b5d9557284d6=<wbr>VmDevice:{id='VmDeviceId:{<wbr>deviceId='017b5e59-01c4-4aac-<wbr>bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-<wbr>a286-180e021cb274=VmDevice:{<wbr>id='VmDeviceId:{deviceId='<wbr>fbddd528-7d93-49c6-a286-<wbr>180e021cb274', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298<br>2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}<br>2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] 
(ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-<wbr>85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-<wbr>920fdc7afa16, deviceId={uuid=a09949aa-5642-<wbr>4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/<wbr>glusterSD/192.168.0.6:_DATA01/<wbr>1e51cecc-eb2e-47d0-b185-<wbr>920fdc7afa16/dom_md/xleases, type=lease}<br>2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-<wbr>rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_<wbr>HARDDISK_d890fa68-fba4-4f49-9=<wbr>{name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/<wbr>dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.<wbr>Object;@77951faf, custom={device_fbddd528-7d93-<wbr>49c6-a286-180e021cb274device_<wbr>879c93ab-4df1-435c-af02-<wbr>565039fcc254=VmDevice:{id='<wbr>VmDeviceId:{deviceId='<wbr>879c93ab-4df1-435c-af02-<wbr>565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-<wbr>a286-180e021cb274device_<wbr>879c93ab-4df1-435c-af02-<wbr>565039fcc254device_8945f61a-<wbr>abbe-4156-8485-<wbr>a4aa6f1908dbdevice_017b5e59-<wbr>01c4-4aac-bf0c-b5d9557284d6=<wbr>VmDevice:{id='VmDeviceId:{<wbr>deviceId='017b5e59-01c4-4aac-<wbr>bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-<wbr>d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, 
port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c<br>2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 
3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}<br>2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}<p> </p></blockquote><br>For the VM with 2 vdisks we see:<blockquote><p>2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}'<br>2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. 
Entities affected : ID: f7d4ec12-627a-4b83-b59e-<wbr>886400d55474 Type: VMAction group MIGRATE_VM with role type USER<br>2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-<wbr>thread-49) [92b5af33-cb87-4142-b8fe-<wbr>8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{<wbr>runAsync='true', hostId='d569c2dd-8f30-4878-<wbr>8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-<wbr>886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-<wbr>840a-f17d7cd87bb1', dstHost='<a href="http://192.168.0.6:54321" target="_blank">192.168.0.6:54321</a>', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{<wbr>name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0<br>2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-<wbr>thread-49) [92b5af33-cb87-4142-b8fe-<wbr>8b838dd7458e] START, MigrateBrokerVDSCommand(<wbr>HostName = <a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.fr</a>, MigrateVDSCommandParameters:{<wbr>runAsync='true', hostId='d569c2dd-8f30-4878-<wbr>8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-<wbr>886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-<wbr>840a-f17d7cd87bb1', dstHost='<a href="http://192.168.0.6:54321" target="_blank">192.168.0.6:54321</a>', migrationMethod='ONLINE', tunnelMigration='false', 
migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{<wbr>name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c<br>2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.vdsbroker.<wbr>MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-<wbr>thread-49) [92b5af33-cb87-4142-b8fe-<wbr>8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c<br>2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-<wbr>thread-49) [92b5af33-cb87-4142-b8fe-<wbr>8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0<br>2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.<wbr>dbbroker.auditloghandling.<wbr>AuditLogDirector] (org.ovirt.thread.pool-6-<wbr>thread-49) [92b5af33-cb87-4142-b8fe-<wbr>8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-<wbr>8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-<wbr>d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: <a href="http://ginger.local.systea.fr" target="_blank">ginger.local.systea.fr</a>, Destination: <a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>, User: admin@internal-authz).<br>...<br>2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.<wbr>vdsbroker.monitoring.<wbr>VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-<wbr>f17d7cd87bb1'<br>2018-02-12 
16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(<a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')<br>2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(<a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>) ignoring it in the refresh until migration is done<br>...<br>2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(<a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')<br>2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(<a href="http://victor.local.systea.fr" target="_blank">victor.local.systea.fr</a>) ignoring it in the refresh until migration is done<br> </p></blockquote><br>and so on, with these last lines repeating indefinitely for hours, until we powered off the VM...<br>Is this a known issue? Any idea what is going on?<br><br>Thanks<br><br>oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.<br><br>--<br><p class="m_8587729722327689770Text1">Regards,<br><br><b>Frank Soyer </b></p>
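Every engine.log entry quoted above carries a correlation ID in square brackets (for example [92b5af33-cb87-4142-b8fe-8b838dd7458e] for the stuck Oracle_PRIMARY migration), which ties together all the steps of one migration attempt. Filtering on that ID is the quickest way to follow a single migration end to end. A minimal sketch, assuming only the log line shape visible in the excerpts above (timestamp, level, [class], (thread), [correlation id], message) — the sample lines are abbreviated copies of lines from this thread:

```python
import re

# engine.log line shape as seen in the excerpts above:
# timestamp  LEVEL  [class]  (thread)  [correlation id]  message
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}[+-]\d{2})\s+"
    r"(?P<level>\w+)\s+\[(?P<cls>[^\]]+)\]\s+\((?P<thread>[^)]+)\)\s+"
    r"\[(?P<corr>[^\]]*)\]\s+(?P<msg>.*)$"
)

def migration_events(lines, correlation_id):
    """Yield (timestamp, message) for every line tagged with the given correlation ID."""
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("corr") == correlation_id:
            yield m.group("ts"), m.group("msg")

# Abbreviated sample lines taken from the excerpt above
sample = [
    "2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:...'",
    "2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-...'",
    "2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), ...",
]
for ts, msg in migration_events(sample, "2f712024-5982-46a8-82c8-fd8293da5725"):
    print(ts, msg)
```

Running the same filter over the full engine.log with the Oracle_PRIMARY correlation ID would show exactly where that migration stops producing events.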
<br>______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/<wbr>mailman/listinfo/users</a><br>
<br></blockquote></div><br></div>
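On the collection side of Maor's request: VDSM writes its log to /var/log/vdsm/vdsm.log on each host (ginger and victor here), with rotated copies alongside. A time-window filter keeps the extract small enough to attach. The sketch below runs against a hypothetical, abbreviated sample file rather than a real vdsm.log, and the sample line content is illustrative only; the awk filter itself only assumes the timestamp is the first two whitespace-separated fields:

```shell
# Hypothetical, abbreviated stand-in for /var/log/vdsm/vdsm.log (line content is illustrative only)
cat > sample-vdsm.log <<'EOF'
2018-02-12 16:46:30,300+0100 INFO  (jsonrpc/4) [vdsm.api] START migrate vmId=3f57e669
2018-02-12 16:49:07,100+0100 INFO  (jsonrpc/2) [vdsm.api] START migrate vmId=f7d4ec12
2018-02-12 17:05:11,900+0100 WARN  (migsrc/f7d4ec12) [virt.vm] migration making no progress
EOF

# Keep only Feb 12 lines between 16:49 and 17:10; plain string comparison works
# because the zero-padded timestamps sort lexicographically.
awk '$1 == "2018-02-12" && $2 >= "16:49" && $2 < "17:10"' sample-vdsm.log
```

Pointing the same awk filter at /var/log/vdsm/vdsm.log on each host, with the window set around 16:49 on Feb 12, would produce the excerpts worth attaching to the thread.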