Hi,
Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger (192.168.0.6) migrating (or failing to migrate...) to victor (192.168.0.5), while the engine.log in the first mail on 2018-02-12 was for VMs standing on victor, migrating (or failing to migrate...) to ginger. Symptoms were exactly the same in both directions, and the VMs work like a charm before, and even after (the migration being "killed" by a poweroff of the VMs).
Am I the only one experiencing this problem?
Thanks
--Regards,
Frank Soyer
On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
Hi Frank,
Sorry about the delayed response.
I've been going through the logs you attached, although I could not find any specific indication of why the migration failed because of the disk you were mentioning.
Does this VM run with both disks on the target host without migration?
Regards,
Maor

On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi Maor,
sorry for the double post, I've changed the email address of my account and supposed that I'd need to re-post it.
And thank you for your time. Here are the logs. I added a vdisk to an existing VM: it no longer migrates, and I had to power it off after several minutes. Then simply deleting the second disk lets it migrate in exactly 9s without problem!
https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
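If it helps to reproduce, the disk-attach step can be scripted too. Below is a minimal sketch with the Python SDK (ovirt-engine-sdk4); the engine URL, credentials, VM name and storage domain name are placeholders, not our real values.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials.
connection = sdk.Connection(
    url='https://engine.example/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]

# Attach a second 100G vdisk to the existing VM, as in the failing case.
attachments_service = vms_service.vm_service(vm.id).disk_attachments_service()
attachments_service.add(
    types.DiskAttachment(
        disk=types.Disk(
            name='myvm_disk2',
            format=types.DiskFormat.COW,
            provisioned_size=100 * 2**30,  # 100 GiB
            storage_domains=[types.StorageDomain(name='DATA01')],
        ),
        interface=types.DiskInterface.VIRTIO,
        bootable=False,
        active=True,
    )
)

connection.close()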
--Regards,
Frank Soyer

On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk <mlipchuk@redhat.com> wrote:
Hi Frank,
I already replied to your last email.
Can you provide the VDSM logs from the time of the migration failure for both hosts?
Thanks,
Maor

On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer@systea.fr> wrote:
Hi all,
I discovered yesterday a problem when migrating VMs with more than one vdisk.
On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs needed for a test from a template with a 20G vdisk. On these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks work well.
Then I saw some updates waiting on the host. I tried to put it in maintenance... but it got stuck on the two VMs. They were marked "migrating", but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time.
I saw that a kvm process for the (big) VMs was launched on the source AND the destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for the VMs: it failed. The only way to stop it was to power off the VMs: the kvm process died on the 2 hosts and the GUI reported a failed migration.
In doubt, I tried to delete the second vdisk on one of these VMs: it then migrated without error! And no access problem.
I tried to extend the first vdisk of the second VM, then delete the second vdisk: it now migrates without problem!
So after another test with a VM with 2 vdisks, I can say that this is what blocked the migration process :(
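For anyone who wants to reproduce, this is roughly how I drive the migration and the cancel attempt from a script. A minimal sketch with the Python SDK (ovirt-engine-sdk4); the engine URL and credentials are placeholders, and I'm assuming cancel_migration() is the SDK counterpart of the GUI cancel action:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials.
connection = sdk.Connection(
    url='https://engine.example/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=Oracle_PRIMARY')[0]
vm_service = vms_service.vm_service(vm.id)

# Start the migration to the other host.
vm_service.migrate(host=types.Host(name='victor.local.systea.fr'))

# Give it plenty of time; with 2 vdisks it never converges for us.
time.sleep(600)

# If the VM is still migrating, try to cancel. This is the step that
# fails here, leaving poweroff as the only way out.
if vm_service.get().status == types.VmStatus.MIGRATING:
    vm_service.cancel_migration()

connection.close()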
In engine.log, for a VM with 1 vdisk migrating well, we see:
2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381
2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381
2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin@internal-authz).
2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done
....
2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57
2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57
2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down'
2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status 'MigratingTo'
2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up'
2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281
2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))
2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298
2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c
2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
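As an aside, for anyone squinting at the convergenceSchedule blob above, here it is laid out as plain data (my paraphrase of the logged value, not an oVirt API): the engine starts with 100ms of allowed downtime and raises it at each stall, aborting after the last step. Our well-behaved migration finished long before reaching the abort step.

# The convergence schedule from the MigrateVDSCommand parameters above,
# rewritten as plain data. Each 'stalling' entry means: when migration
# progress stalls for the Nth time, allow more downtime (in ms);
# limit=-1 is the final step, which aborts the migration attempt.
convergence_schedule = {
    'init': [{'name': 'setDowntime', 'params': [100]}],
    'stalling': [
        {'limit': 1,  'action': {'name': 'setDowntime', 'params': [150]}},
        {'limit': 2,  'action': {'name': 'setDowntime', 'params': [200]}},
        {'limit': 3,  'action': {'name': 'setDowntime', 'params': [300]}},
        {'limit': 4,  'action': {'name': 'setDowntime', 'params': [400]}},
        {'limit': 6,  'action': {'name': 'setDowntime', 'params': [500]}},
        {'limit': -1, 'action': {'name': 'abort', 'params': []}},
    ],
}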
For the VM with 2 vdisks we see:
2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}'
2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0
2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c
2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c
2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz).
...
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
...
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
and so on, with the last lines repeated indefinitely for hours until we powered off the VM...
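If someone wants to check their own engine.log for the same loop, here is a quick throwaway script; it only counts the 'unexpectedly detected' message shown above, and it assumes the default engine log path:

import re
from collections import Counter

# Count, per VM name, the repeating VmAnalyzer message
# "VM '<uuid>'(<name>) was unexpectedly detected as 'MigratingTo'".
pattern = re.compile(
    r"VM '[0-9a-f-]+'\(([^)]+)\) was unexpectedly detected as 'MigratingTo'"
)

counts = Counter()
with open('/var/log/ovirt-engine/engine.log') as log:
    for line in log:
        match = pattern.search(line)
        if match:
            counts[match.group(1)] += 1

for vm_name, hits in counts.most_common():
    print(f'{vm_name}: {hits} occurrences')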
Is this a known issue? Any idea about it?
Thanks
oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
--Regards,
Frank Soyer
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users