[ovirt-users] VMs with multiple vdisks don't migrate

Milan Zamazal mzamazal at redhat.com
Mon Feb 26 11:59:37 UTC 2018


"fsoyer" <fsoyer at systea.fr> writes:

> I don't believe that this is related to a host; tests have been done from victor
> as source to ginger as destination, and from ginger to victor. I don't see problems
> on the storage (Gluster 3.12, natively managed by oVirt), since VMs with a single
> disk from 20 to 250G migrate without error in a few seconds and with no downtime.

The host itself may be fine, but libvirt/QEMU running there may expose
problems, perhaps just for some VMs.  According to your logs, something
is not behaving as expected on the source host during the faulty
migration.

> How can I enable this libvirt debug mode?

Set the following options in /etc/libvirt/libvirtd.conf (look for the
commented examples there):

- log_level=1
- log_outputs="1:file:/var/log/libvirt/libvirtd.log"

and restart libvirtd.  Then /var/log/libvirt/libvirtd.log should contain
the log.  It will be huge, so I suggest enabling it only while
reproducing the problem.
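
For example, on a systemd-based host (which oVirt hosts typically are),
the whole cycle looks roughly like this:

  # /etc/libvirt/libvirtd.conf
  log_level=1
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"

  # apply the change, reproduce the failing migration, then revert the
  # two options and restart once more
  systemctl restart libvirtd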

> --
>
> Regards,
>
> Frank Soyer 
>
>  
>
> On Friday, February 23, 2018 09:56 CET, Milan Zamazal <mzamazal at redhat.com> wrote:
>  Maor Lipchuk <mlipchuk at redhat.com> writes:
>
>> I encountered a bug (see [1]) which contains the same error mentioned in
>> your VDSM logs (see [2]), but I doubt it is related.
>
> Indeed, it's not related.
>
> The error in vdsm_victor.log just means that the info gathering call
> tries to access the libvirt domain before the incoming migration has
> completed. It's ugly but harmless.
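>
> For illustration, the guard that would silence it could look like this
> (a hypothetical sketch against VDSM's internal API, not actual VDSM
> code):
>
>   from vdsm.virt.virdomain import NotConnectedError
>
>   def io_tune_or_empty(vm):
>       # While an incoming migration is still in flight, the wrapped
>       # libvirt domain is not attached yet and any stats call raises
>       # NotConnectedError; treat that as "no data yet".
>       try:
>           return vm.getIoTune()
>       except NotConnectedError:
>           return []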
>
>> Milan, do you have any advice on troubleshooting the issue? Would the
>> libvirt/qemu logs help?
>
> It seems there is something wrong on (at least) the source host. There
> are no migration progress messages in the vdsm_ginger.log and there are
> warnings about stale stat samples. That looks like problems with
> calling libvirt – slow and/or stuck calls, maybe due to storage
> problems. The possibly faulty second disk could cause that.
>
> libvirt debug logs could tell us whether that is indeed the problem and
> whether it is caused by storage or something else.
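>
> Once the debug log exists, the migration-related entries can be pulled
> out of the noise with something like this (a rough filter, adjust the
> pattern as needed):
>
>   grep -i migrat /var/log/libvirt/libvirtd.log | less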
>
>> I would suggest opening a bug on that issue so we can track it more
>> properly.
>>
>> Regards,
>> Maor
>>
>>
>> [1]
>> https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to
>> VM running on 2 Hosts
>>
>> [2]
>> 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]
>> Internal server error (__init__:577)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
>>     res = method(**params)
>>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
>>     result = fn(*methodArgs)
>>   File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
>>     io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
>>   File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
>>     'current_values': v.getIoTune()}
>>   File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
>>     result = self.getIoTuneResponse()
>>   File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
>>     res = self._dom.blockIoTune(
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
>>     % self.vmid)
>> NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not
>> started yet or was shut down
>>
>> On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fsoyer at systea.fr> wrote:
>>
>>> Hi,
>>> Yes, on 2018-02-16 (vdsm logs) I tried with a VM running on ginger
>>> (192.168.0.6), migrated (or rather failing to migrate) to victor (192.168.0.5),
>>> while the engine.log in the first mail, on 2018-02-12, was for VMs running
>>> on victor, migrated (or failing to migrate) to ginger. The symptoms were
>>> exactly the same in both directions, and the VMs work like a charm before,
>>> and even after (with the migration "killed" by powering off the VMs).
>>> Am I the only one experiencing this problem?
>>>
>>>
>>> Thanks
>>> --
>>>
>>> Regards,
>>>
>>> Frank Soyer
>>>
>>>
>>>
>>> On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk <mlipchuk at redhat.com>
>>> wrote:
>>>
>>>
>>> Hi Frank,
>>>
>>> Sorry about the delayed response.
>>> I've been going through the logs you attached, but I could not find
>>> any specific indication that the migration failed because of the disk
>>> you mentioned.
>>> Does this VM run with both disks on the target host without migration?
>>>
>>> Regards,
>>> Maor
>>>
>>>
>>> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fsoyer at systea.fr> wrote:
>>>>
>>>> Hi Maor,
>>>> sorry for the double post, I changed the email address of my account and
>>>> assumed that I'd need to re-post it.
>>>> And thank you for your time. Here are the logs. I added a vdisk to an
>>>> existing VM: it no longer migrates, and has to be powered off after several
>>>> minutes. Then simply deleting the second disk lets it migrate in exactly 9s
>>>> without any problem!
>>>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
>>>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
>>>>
>>>> --
>>>>
>>>> Regards,
>>>>
>>>> Frank Soyer
>>>> On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk
>>>> <mlipchuk at redhat.com> wrote:
>>>>
>>>>
>>>> Hi Frank,
>>>>
>>>> I already replied to your last email.
>>>> Can you provide the VDSM logs from the time of the migration failure for
>>>> both hosts:
>>>> ginger.local.systea.fr and victor.local.systea.fr
>>>>
>>>> Thanks,
>>>> Maor
>>>>
>>>> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fsoyer at systea.fr> wrote:
>>>>>
>>>>> Hi all,
>>>>> Yesterday I discovered a problem when migrating VMs with more than one
>>>>> vdisk.
>>>>> On our test servers (oVirt 4.1, shared storage with Gluster), I created 2
>>>>> VMs needed for a test from a template with a 20G vdisk. To these VMs I
>>>>> added a 100G vdisk (for these tests I didn't want to waste time extending
>>>>> the existing vdisks... but I lost time in the end...). The VMs with the 2
>>>>> vdisks work well.
>>>>> Then I saw some updates waiting on the host. I tried to put it into
>>>>> maintenance... but it got stuck on the two VMs. They were marked "migrating"
>>>>> but were no longer accessible. Other (small) VMs with only 1 vdisk were
>>>>> migrated without problem at the same time.
>>>>> I saw that a kvm process for the (big) VMs was launched on the source
>>>>> AND the destination host, but after tens of minutes the migration and the
>>>>> VMs were still frozen. I tried to cancel the migration for the VMs: it failed.
>>>>> The only way to stop it was to power off the VMs: the kvm process died on
>>>>> the 2 hosts and the GUI reported a failed migration.
>>>>> Just in case, I tried to delete the second vdisk on one of these VMs: it
>>>>> then migrated without error! And with no access problem.
>>>>> I extended the first vdisk of the second VM, then deleted the second
>>>>> vdisk: it now migrates without problem!
>>>>>
>>>>> So after another test with a VM with 2 vdisks, I can say that this is
>>>>> what blocked the migration process :(
>>>>>
>>>>> In engine.log, for a VM with 1 vdisk migrating well, we see:
>>>>>
>>>>> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired
>>>>> to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]',
>>>>> sharedLocks=''}'
>>>>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
>>>>> Running command: MigrateVmToServerCommand internal: false. Entities
>>>>> affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction
>>>>> group MIGRATE_VM with role type USER
>>>>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
>>>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
>>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
>>>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='
>>>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false',
>>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>>>> params=[]}}]]'}), log id: 14f61ee0
>>>>> 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32)
>>>>> [2f712024-5982-46a8-82c8-fd8293da5725] START,
>>>>> MigrateBrokerVDSCommand(HostName = victor.local.systea.fr,
>>>>> MigrateVDSCommandParameters:{runAsync='true',
>>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
>>>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='
>>>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false',
>>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>>>> params=[]}}]]'}), log id: 775cd381
>>>>> 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32)
>>>>> [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand,
>>>>> log id: 775cd381
>>>>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
>>>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
>>>>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.db
>>>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32)
>>>>> [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID:
>>>>> VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725,
>>>>> Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom
>>>>> ID: null, Custom Event ID: -1, Message: Migration started (VM:
>>>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination:
>>>>> ginger.local.systea.fr, User: admin at internal-authz).
>>>>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66]
>>>>> START, FullListVDSCommand(HostName = victor.local.systea.fr,
>>>>> FullListVDSCommandParameters:{runAsync='true',
>>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
>>>>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66]
>>>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true,
>>>>> emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493,
>>>>> guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda},
>>>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true,
>>>>> timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR,
>>>>> guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false,
>>>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87
>>>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:
>>>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286
>>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi
>>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-
>>>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0
>>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}',
>>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]',
>>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
>>>>> readOnly='false', deviceAlias='input0', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm
>>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4
>>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a
>>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>>>> vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768,
>>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192,
>>>>> kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt,
>>>>> devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16,
>>>>> clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435
>>>>> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1)
>>>>> [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
>>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9)
>>>>> [54a65b66] Received a vnc Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>>>> displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d,
>>>>> port=5901}
>>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9)
>>>>> [54a65b66] Received a lease Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456,
>>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:
>>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> was unexpectedly detected as 'MigratingTo' on VDS
>>>>> 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr)
>>>>> (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
>>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'
>>>>> is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(
>>>>> ginger.local.systea.fr) ignoring it in the refresh until migration is
>>>>> done
>>>>> ....
>>>>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'
>>>>> was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(
>>>>> victor.local.systea.fr)
>>>>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START,
>>>>> DestroyVDSCommand(HostName = victor.local.systea.fr,
>>>>> DestroyVmVDSCommandParameters:{runAsync='true',
>>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false',
>>>>> secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log
>>>>> id: 560eca57
>>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH,
>>>>> DestroyVDSCommand, log id: 560eca57
>>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> moved from 'MigratingFrom' --> 'Down'
>>>>> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status
>>>>> 'MigratingTo'
>>>>> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY)
>>>>> moved from 'MigratingTo' --> 'Up'
>>>>> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) []
>>>>> START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr,
>>>>> MigrateStatusVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
>>>>> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) []
>>>>> FINISH, MigrateStatusVDSCommand, log id: 7a25c281
>>>>> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db
>>>>> broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) []
>>>>> EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID:
>>>>> 2f712024-5982-46a8-82c8-fd8293da5725, Job ID:
>>>>> 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID:
>>>>> null, Custom Event ID: -1, Message: Migration completed (VM:
>>>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination:
>>>>> ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual
>>>>> downtime: (N/A))
>>>>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (ForkJoinPool-1-worker-4) [] Lock freed to object
>>>>> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]',
>>>>> sharedLocks=''}'
>>>>> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START,
>>>>> FullListVDSCommand(HostName = ginger.local.systea.fr,
>>>>> FullListVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
>>>>> 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH,
>>>>> FullListVDSCommand, return: [{acpiEnable=true,
>>>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=,
>>>>> tabletEnable=true, pid=18748, guestDiskMapping={},
>>>>> transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2,
>>>>> guestNumaNodes=[Ljava.lang.Object;@760085fd,
>>>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87
>>>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:
>>>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286
>>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi
>>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-
>>>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0
>>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}',
>>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]',
>>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
>>>>> readOnly='false', deviceAlias='input0', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm
>>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4
>>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a
>>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768,
>>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false,
>>>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false,
>>>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3,
>>>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600,
>>>>> display=vnc}], log id: 7cc65298
>>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) []
>>>>> Received a vnc Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d,
>>>>> port=5901}
>>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) []
>>>>> Received a lease Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456,
>>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:
>>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a]
>>>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true,
>>>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=,
>>>>> tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_H
>>>>> ARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda},
>>>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true,
>>>>> timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj
>>>>> ect;@77951faf, custom={device_fbddd528-7d93-4
>>>>> 9c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fc
>>>>> c254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286
>>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi
>>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-
>>>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0
>>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}',
>>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]',
>>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
>>>>> readOnly='false', deviceAlias='input0', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm
>>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4
>>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a
>>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768,
>>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false,
>>>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false,
>>>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd,
>>>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620,
>>>>> display=vnc}], log id: 58cdef4c
>>>>> 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5)
>>>>> [7fcb200a] Received a vnc Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d,
>>>>> port=5901}
>>>>> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5)
>>>>> [7fcb200a] Received a lease Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456,
>>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:
>>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> For the VM with 2 vdisks, we see:
>>>>>
>>>>> 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired
>>>>> to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]',
>>>>> sharedLocks=''}'
>>>>> 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>>>> Running command: MigrateVmToServerCommand internal: false. Entities
>>>>> affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction
>>>>> group MIGRATE_VM with role type USER
>>>>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5',
>>>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='
>>>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false',
>>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>>>> params=[]}}]]'}), log id: 3702a9e0
>>>>> 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49)
>>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] START,
>>>>> MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr,
>>>>> MigrateVDSCommandParameters:{runAsync='true',
>>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69',
>>>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5',
>>>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='
>>>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false',
>>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>>>>> action={name=setDowntime, params=[200]}}, {limit=3,
>>>>> action={name=setDowntime, params=[300]}}, {limit=4,
>>>>> action={name=setDowntime, params=[400]}}, {limit=6,
>>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>>>>> params=[]}}]]'}), log id: 1840069c
>>>>> 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49)
>>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand,
>>>>> log id: 1840069c
>>>>> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
>>>>> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db
>>>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49)
>>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID:
>>>>> VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e,
>>>>> Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom
>>>>> ID: null, Custom Event ID: -1, Message: Migration started (VM:
>>>>> Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination:
>>>>> victor.local.systea.fr, User: admin at internal-authz).
>>>>> ...
>>>>> 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4)
>>>>> [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
>>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY)
>>>>> was unexpectedly detected as 'MigratingTo' on VDS
>>>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'
>>>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(
>>>>> victor.local.systea.fr) ignoring it in the refresh until migration is
>>>>> done
>>>>> ...
>>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY)
>>>>> was unexpectedly detected as 'MigratingTo' on VDS
>>>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'
>>>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(
>>>>> victor.local.systea.fr) ignoring it in the refresh until migration is
>>>>> done
>>>>>
>>>>>
>>>>>
>>>>> and so on, with the last lines repeating indefinitely for hours until we
>>>>> powered off the VM...
>>>>> Is this a known issue? Any ideas about it?
>>>>>
>>>>> Thanks
>>>>>
>>>>> oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
>>>>>
>>>>> --
>>>>>
>>>>> Regards,
>>>>>
>>>>> Frank Soyer
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users at ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users

