Hey Johann,
I think this is not an actual bug, but rather a general issue with migration
convergence, plus a general problem with the error messages reported during
migrations (both are being worked on and targeted for 4.0).
In your particular case the root cause seems to be this, from the logs:
"The migration took 40 seconds which is exceeding the configured maximum
time for migrations of 32 seconds. The migration will be aborted."
So I'd say the VM was dirtying its memory faster than it could be copied to
the destination, and after 40 s the migration was aborted.
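(The 32 s limit itself is not arbitrary, by the way: vdsm scales it with the
guest's RAM size, and tiny-i1 has memSize=512 according to your engine.log.
Assuming the default of 64 seconds allowed per GiB of guest memory, that gives:

    0.5 GiB x 64 s/GiB = 32 s

which is exactly the limit quoted in the warning above.)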
What you can do is tweak, in vdsm.conf on each host, the migration progress
timeout, the migration bandwidth (if you have capacity for more than 32 MiB/s
per VM on the migration network), and perhaps the maximum migration downtime
(how long the VM may be paused at the end of the migration); see the sketch
below.
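For example, something along these lines in /etc/vdsm/vdsm.conf (just a
sketch -- the key names and defaults below are from my memory of vdsm, so
please verify them against the comments in your version's vdsm.conf; the
values are only illustrative):

    [vars]
    # Seconds allowed per GiB of guest RAM before the migration is
    # aborted (default 64, which is where your 32 s limit comes from).
    migration_max_time_per_gib_mem = 256
    # Maximum bandwidth per outgoing migration, in MiB/s (default 32).
    migration_max_bandwidth = 100
    # Maximum time in milliseconds the VM may be paused at the end
    # of the migration (default 500).
    migration_downtime = 1000

After changing it, restart vdsmd on the host (systemctl restart vdsmd) for
the new values to take effect.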
Also, from inside the guest you can try to reduce the load if possible, so
it does not dirty memory so quickly...
Hope some of this helps,
Tomas
----- Original Message -----
From: "Johann Kiefel" <info(a)dshin.net>
To: users(a)ovirt.org
Sent: Thursday, December 17, 2015 4:32:36 PM
Subject: [ovirt-users] Migration of VM fails Failed in 'DestroyVDS' method Message: VDSM command failed: Virtual machine does not exist
Dear colleagues,
please help us out. The migration of VMs from host to host fails, although
we think we have configured everything properly. We searched through the
mailing list and found nothing that describes or handles our error.
The Details:
We have three identical hosts and completed the self_hosted_engine
installation on all of them (the host OS is the latest CentOS 7). As shared
storage we use an NFS share hosted by an independent storage server. The
shares are presented on a dedicated network (192.168.23.0/24). All of the
hosts have two networks: the management network is 192.168.10.0/24 and the
10Gbit storage network is 192.168.23.0/24.
Our hosted engine (192.168.10.20) is up and running. All three hosts are
connected and in status up. We created a second logical network, "storage",
for our physical storage network, then synced and tested it on every host.
This "storage" network gets the roles "migration" and "gluster" in the
Hosted-Engine UI.
We created and mounted our Data, ISO and Export domains in the Hosted-Engine
UI over the 192.168.23.0 (storage) network. We can successfully install a
CentOS 6 or 7 VM and connect to it.
Now if we try to migrate the VM, the migration fails.
The Logs:
###################################################
engine.log from the Engine VM
###################################################
2015-12-17 13:50:09,033 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-34) [20ed2774] START,
FullListVDSCommand(HostName = ,
FullListVDSCommandParameters:{runAsync='true',
hostId='62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd',
vds='Host[,62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd]',
vmIds='[408e7c39-d2a2-4412-8522-6eda216f17e0]'}), log id: 34cea1dd
2015-12-17 13:50:09,075 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-34) [20ed2774] FINISH, FullListVDSCommand,
return: [{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
guestDiskMapping={398398f3-9572-4d67-8={name=/dev/vda}, QEMU_DVD-ROM={name=/dev/sr0}},
vmId=408e7c39-d2a2-4412-8522-6eda216f17e0, pid=19010,
devices=[Ljava.lang.Object;@69e4cf40, smp=4, vmType=kvm, displayIp=0,
display=vnc, displaySecurePort=-1, memSize=4096, displayPort=5900,
cpuType=SandyBridge,
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
statusTime=4455363570, vmName=HostedEngine, clientIp=, pauseCode=NOERR}],
log id: 34cea1dd
2015-12-17 13:50:09,076 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-34) [20ed2774] START,
FullListVDSCommand(HostName = ,
FullListVDSCommandParameters:{runAsync='true',
hostId='62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd',
vds='Host[,62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd]',
vmIds='[0e956372-f4bd-4535-8472-73829277593f]'}), log id: 74f63ccf
2015-12-17 13:50:09,105 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-34) [20ed2774] FINISH, FullListVDSCommand,
return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.2.0,
tabletEnable=true, pid=30907, memGuaranteedSize=512, transparentHugePages=true,
displaySecurePort=-1, spiceSslCipherSuite=DEFAULT, cpuType=SandyBridge,
smp=1, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@74afaff3,
smartcardEnable=false, numaTune={nodeset=0,1, mode=interleave}, custom={},
vmType=kvm, memSize=512, smpCoresPerSocket=1, vmName=tiny-i1, nice=0,
status=Up, maxMemSize=4194304,
bootMenuEnable=false, vmId=0e956372-f4bd-4535-8472-73829277593f,
displayIp=0, displayPort=5901, guestDiskMapping={},
spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,
nicModel=rtl8139,pv, keyboardLayout=de, kvmEnable=true,
pitReinjection=false, devices=[Ljava.lang.Object;@3a48b65e,
display=vnc, timeOffset=3600, maxVCpus=16, clientIp=192.168.1.16,
statusTime=4455363610, maxMemSlots=16}], log id: 74f63ccf
2015-12-17 13:50:09,106 INFO [org.ovirt.engine.core.vdsbroker.VmsMonitoring]
(DefaultQuartzScheduler_Worker-34) [20ed2774] Received a memballoon Device
without an address when processing VM 0e956372-f4bd-4535-8472-73829277593f
devices, skipping device: {device=memballoon, specParams={model=none},
deviceType=balloon, type=balloon, alias=balloon0}
2015-12-17 13:50:09,106 INFO [org.ovirt.engine.core.vdsbroker.VmsMonitoring]
(DefaultQuartzScheduler_Worker-34) [20ed2774] Received a vnc Device without
an address when processing VM 0e956372-f4bd-4535-8472-73829277593f devices,
skipping device: {device=vnc, deviceType=graphics, type=graphics,
deviceId=c6ebce28-25d5-418c-a46d-0c21d61a62fd, port=5901}
2015-12-17 13:50:09,107 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring]
(DefaultQuartzScheduler_Worker-34) [20ed2774] VM
'0e956372-f4bd-4535-8472-73829277593f' managed non pluggable device was
removed unexpectedly from libvirt:
'VmDevice:{id='VmDeviceId:{deviceId='62338ecf-adf4-4e12-826b-cc837e75055e',
vmId='0e956372-f4bd-4535-8472-73829277593f'}', device='vnc',
type='GRAPHICS', bootOrder='0', specParams='[]', address='', managed='true',
plugged='false', readOnly='false', deviceAlias='', customProperties='[]',
snapshotId='null', logicalName='null', usingScsiReservation='false'}'
2015-12-17 13:50:09,107 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring]
(DefaultQuartzScheduler_Worker-34) [20ed2774] VM
'0e956372-f4bd-4535-8472-73829277593f' managed non pluggable device was
removed unexpectedly from libvirt:
'VmDevice:{id='VmDeviceId:{deviceId='8c2bad1b-ef19-4f43-9e62-d912f98e3901',
vmId='0e956372-f4bd-4535-8472-73829277593f'}', device='cdrom', type='DISK',
bootOrder='0', specParams='[path=]', address='', managed='true',
plugged='false', readOnly='true', deviceAlias='', customProperties='[]',
snapshotId='null', logicalName='null', usingScsiReservation='false'}'
2015-12-17 13:50:24,185 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-74) [50b5a02f] START,
FullListVDSCommand(HostName = ,
FullListVDSCommandParameters:{runAsync='true',
hostId='62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd',
vds='Host[,62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd]',
vmIds='[408e7c39-d2a2-4412-8522-6eda216f17e0]'}), log id: 30a52aac
2015-12-17 13:50:25,103 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-74) [50b5a02f] FINISH, FullListVDSCommand,
return: [{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
guestDiskMapping={398398f3-9572-4d67-8={name=/dev/vda}, QEMU_DVD-ROM={name=/dev/sr0}},
vmId=408e7c39-d2a2-4412-8522-6eda216f17e0, pid=19010,
devices=[Ljava.lang.Object;@793cf9d3, smp=4, vmType=kvm, displayIp=0,
display=vnc, displaySecurePort=-1, memSize=4096, displayPort=5900,
cpuType=SandyBridge,
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
statusTime=4455378730, vmName=HostedEngine, clientIp=, pauseCode=NOERR}],
log id: 30a52aac
2015-12-17 13:50:40,230 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-91) [6b1ec6a9] START,
FullListVDSCommand(HostName = ,
FullListVDSCommandParameters:{runAsync='true',
hostId='62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd',
vds='Host[,62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd]',
vmIds='[408e7c39-d2a2-4412-8522-6eda216f17e0]'}), log id: 294d3ffe
2015-12-17 13:50:40,275 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-91) [6b1ec6a9] FINISH, FullListVDSCommand,
return: [{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
guestDiskMapping={398398f3-9572-4d67-8={name=/dev/vda}, QEMU_DVD-ROM={name=/dev/sr0}},
vmId=408e7c39-d2a2-4412-8522-6eda216f17e0, pid=19010,
devices=[Ljava.lang.Object;@ea451e1, smp=4, vmType=kvm, displayIp=0,
display=vnc, displaySecurePort=-1, memSize=4096, displayPort=5900,
cpuType=SandyBridge,
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
statusTime=4455394760, vmName=HostedEngine, clientIp=, pauseCode=NOERR}],
log id: 294d3ffe
2015-12-17 13:50:55,641 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-5) [] START, FullListVDSCommand(HostName = ,
FullListVDSCommandParameters:{runAsync='true',
hostId='62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd',
vds='Host[,62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd]',
vmIds='[408e7c39-d2a2-4412-8522-6eda216f17e0]'}), log id: 2aedaa54
2015-12-17 13:50:56,157 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-5) [] FINISH, FullListVDSCommand, return:
[{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
guestDiskMapping={398398f3-9572-4d67-8={name=/dev/vda}, QEMU_DVD-ROM={name=/dev/sr0}},
vmId=408e7c39-d2a2-4412-8522-6eda216f17e0, pid=19010,
devices=[Ljava.lang.Object;@17b907dd, smp=4, vmType=kvm, displayIp=0,
display=vnc, displaySecurePort=-1, memSize=4096, displayPort=5900,
cpuType=SandyBridge,
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
statusTime=4455410180, vmName=HostedEngine, clientIp=, pauseCode=NOERR}],
log id: 2aedaa54
2015-12-17 13:51:06,563 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-4)
[5e46a29f] Lock Acquired to object
'EngineLock:{exclusiveLocks='[0e956372-f4bd-4535-8472-73829277593f=<VM,
ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName tiny-i1>]', sharedLocks='null'}'
2015-12-17 13:51:06,595 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-4)
[5e46a29f] Candidate host 'hosted_engine_1'
('62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd') was filtered out by
'VAR__FILTERTYPE__INTERNAL' filter 'Migration' (correlation id: null)
2015-12-17 13:51:06,618 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(org.ovirt.thread.pool-8-thread-20) [5e46a29f] Running command:
MigrateVmToServerCommand internal: false. Entities affected : ID:
0e956372-f4bd-4535-8472-73829277593f Type: VMAction group MIGRATE_VM with
role type USER
2015-12-17 13:51:06,639 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [5e46a29f] START, MigrateVDSCommand(
MigrateVDSCommandParameters:{runAsync='true',
hostId='62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd',
vmId='0e956372-f4bd-4535-8472-73829277593f', srcHost='m1.example.de',
dstVdsId='e3189b9a-b725-4a7c-987c-6b22bd4abea2',
dstHost='n1.example.de:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='false',
migrateCompressed='false', consoleAddress='null'}), log id: 25229fa3
2015-12-17 13:51:06,640 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [5e46a29f] START,
MigrateBrokerVDSCommand(HostName = hosted_engine_1,
MigrateVDSCommandParameters:{runAsync='true',
hostId='62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd',
vmId='0e956372-f4bd-4535-8472-73829277593f', srcHost='m1.example.de',
dstVdsId='e3189b9a-b725-4a7c-987c-6b22bd4abea2',
dstHost='n1.example.de:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='false',
migrateCompressed='false', consoleAddress='null'}), log id: 4f18f441
2015-12-17 13:51:06,703 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder]
(default task-6) [] Failed to find host on any provider by host name
'ov.example.de'
2015-12-17 13:51:07,184 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder]
(default task-2) [] Failed to find host on any provider by host name
'ov.example.de'
2015-12-17 13:51:07,305 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [5e46a29f] FINISH,
MigrateBrokerVDSCommand, log id: 4f18f441
2015-12-17 13:51:07,310 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [5e46a29f] FINISH, MigrateVDSCommand,
return: MigratingFrom, log id: 25229fa3
2015-12-17 13:51:07,316 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-20) [5e46a29f] Correlation ID: 5e46a29f, Job
ID: 8c267b6a-3d37-45b5-837c-39bcee545521, Call Stack: null,
Custom Event ID: -1, Message: Migration started (VM: tiny-i1, Source:
hosted_engine_1, Destination: hosted_engine_2, User: admin@internal).
2015-12-17 13:51:08,022 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder]
(default task-15) [] Failed to find host on any provider by host name
'ov.example.de'
2015-12-17 13:51:11,224 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-79) [] START, FullListVDSCommand(HostName = ,
FullListVDSCommandParameters:{runAsync='true',
hostId='62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd',
vds='Host[,62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd]',
vmIds='[408e7c39-d2a2-4412-8522-6eda216f17e0]'}), log id: c4b9f52
2015-12-17 13:51:11,254 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-79) [] FINISH, FullListVDSCommand, return:
[{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
guestDiskMapping={398398f3-9572-4d67-8={name=/dev/vda}, QEMU_DVD-ROM={name=/dev/sr0}},
vmId=408e7c39-d2a2-4412-8522-6eda216f17e0, pid=19010,
devices=[Ljava.lang.Object;@54d8b81c, smp=4, vmType=kvm, displayIp=0,
display=vnc, displaySecurePort=-1, memSize=4096, displayPort=5900,
cpuType=SandyBridge,
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
statusTime=4455425760, vmName=HostedEngine, clientIp=, pauseCode=NOERR}],
log id: c4b9f52
2015-12-17 13:51:12,232 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-83) [59659503] RefreshVmList VM id
'0e956372-f4bd-4535-8472-73829277593f' is migrating to VDS 'hosted_engine_2'
ignoring it in the refresh until migration is done
2015-12-17 13:51:26,290 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-21) [2fb3e350] START,
FullListVDSCommand(HostName = ,
FullListVDSCommandParameters:{runAsync='true',
hostId='62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd',
vds='Host[,62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd]',
vmIds='[408e7c39-d2a2-4412-8522-6eda216f17e0]'}), log id: 29a38cc2
2015-12-17 13:51:26,690 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-21) [2fb3e350] FINISH, FullListVDSCommand,
return: [{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
guestDiskMapping={398398f3-9572-4d67-8={name=/dev/vda}, QEMU_DVD-ROM={name=/dev/sr0}},
vmId=408e7c39-d2a2-4412-8522-6eda216f17e0, pid=19010,
devices=[Ljava.lang.Object;@306e29e9, smp=4, vmType=kvm, displayIp=0,
display=vnc, displaySecurePort=-1, memSize=4096, displayPort=5900,
cpuType=SandyBridge,
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
statusTime=4455440820, vmName=HostedEngine, clientIp=, pauseCode=NOERR}],
log id: 29a38cc2
2015-12-17 13:51:27,295 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-85) [] RefreshVmList VM id
'0e956372-f4bd-4535-8472-73829277593f' is migrating to VDS 'hosted_engine_2'
ignoring it in the refresh until migration is done
2015-12-17 13:51:41,726 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-60) [] START, FullListVDSCommand(HostName = ,
FullListVDSCommandParameters:{runAsync='true',
hostId='62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd',
vds='Host[,62e30d77-fe4b-41d4-a9f7-b66d0a6ad7bd]',
vmIds='[408e7c39-d2a2-4412-8522-6eda216f17e0]'}), log id: 224c57c9
2015-12-17 13:51:42,263 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler_Worker-60) [] FINISH, FullListVDSCommand, return:
[{status=Up, nicModel=rtl8139,pv, emulatedMachine=pc,
guestDiskMapping={398398f3-9572-4d67-8={name=/dev/vda}, QEMU_DVD-ROM={name=/dev/sr0}},
vmId=408e7c39-d2a2-4412-8522-6eda216f17e0, pid=19010,
devices=[Ljava.lang.Object;@29eb6e5, smp=4, vmType=kvm, displayIp=0,
display=vnc, displaySecurePort=-1, memSize=4096, displayPort=5900,
cpuType=SandyBridge,
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
statusTime=4455456260, vmName=HostedEngine, clientIp=, pauseCode=NOERR}],
log id: 224c57c9
2015-12-17 13:51:42,311 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-71) [433f9bb6] RefreshVmList VM id
'0e956372-f4bd-4535-8472-73829277593f' is migrating to VDS 'hosted_engine_2'
ignoring it in the refresh until migration is done
2015-12-17 13:51:47,215 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-294) [] START, DestroyVDSCommand(HostName =
hosted_engine_2, DestroyVmVDSCommandParameters:{runAsync='true',
hostId='e3189b9a-b725-4a7c-987c-6b22bd4abea2',
vmId='0e956372-f4bd-4535-8472-73829277593f',
force='false', secondsToWait='0', gracefully='false', reason=''}), log id:
24778267
2015-12-17 13:51:47,318 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-294) [] Failed in 'DestroyVDS' method
2015-12-17 13:51:47,321 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ForkJoinPool-1-worker-294) [] Correlation ID: null, Call Stack: null,
Custom Event ID: -1, Message: VDSM hosted_engine_2 command failed:
Virtual machine does not exist
2015-12-17 13:51:47,321 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-294) [] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand' return value
'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc [code=1, message=Virtual
machine does not exist]]'
2015-12-17 13:51:47,321 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-294) [] HostName = hosted_engine_2
2015-12-17 13:51:47,321 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-294) [] Command 'DestroyVDSCommand(HostName =
hosted_engine_2, DestroyVmVDSCommandParameters:{runAsync='true',
hostId='e3189b9a-b725-4a7c-987c-6b22bd4abea2',
vmId='0e956372-f4bd-4535-8472-73829277593f', force='false',
secondsToWait='0', gracefully='false', reason=''})' execution failed:
VDSGenericException: VDSErrorException: Failed to DestroyVDS, error =
Virtual machine does not exist, code = 1
2015-12-17 13:51:47,321 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-294) [] FINISH, DestroyVDSCommand, log id: 24778267
2015-12-17 13:51:47,322 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(ForkJoinPool-1-worker-294) [] RefreshVmList VM id
'0e956372-f4bd-4535-8472-73829277593f' status = 'Down' on VDS
'hosted_engine_2' ignoring it in the refresh until
migration is done
2015-12-17 13:51:49,203 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder]
(default task-42) [] Failed to find host on any provider by host name
'ov.example.de'
2015-12-17 13:51:55,416 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to n1.example.de/192.168.10.12
2015-12-17 13:51:57,272 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to m1.example.de/192.168.10.11
2015-12-17 13:51:57,916 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-28) [ff193b8] VM
'0e956372-f4bd-4535-8472-73829277593f'(tiny-i1) moved from
'MigratingFrom'
--> 'Up'
2015-12-17 13:51:57,916 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
(DefaultQuartzScheduler_Worker-28) [ff193b8] Adding VM
'0e956372-f4bd-4535-8472-73829277593f' to re-run list
###################################################
The vdsm.log on Source Host:
###################################################
Thread-103967::DEBUG::2015-12-17
14:51:40,619::fileSD::173::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
if=/rhev/data-center/mnt/192.168.23.10:_mnt_sdb_nfs_ISO/e793de29-f95f-4512-9a57-ece72268b53f/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-103967::DEBUG::2015-12-17
14:51:40,626::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n330 bytes (330 B) copied,
0.00023453 s, 1.4 MB/s\n'; <rc> = 0
Reactor thread::INFO::2015-12-17
14:51:41,423::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:36777
Reactor thread::DEBUG::2015-12-17
14:51:41,427::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-12-17
14:51:41,428::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:36777
Reactor thread::DEBUG::2015-12-17
14:51:41,428::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 36777)
BindingXMLRPC::INFO::2015-12-17
14:51:41,428::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:36777
Thread-108220::INFO::2015-12-17
14:51:41,429::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:36777 started
Thread-108220::DEBUG::2015-12-17
14:51:41,431::task::595::Storage.TaskManager.Task::(_updateState)
Task=`2cdd058a-ef2c-4eea-a172-33b07f23be33`::moving from state init -> state
preparing
Thread-108220::INFO::2015-12-17
14:51:41,432::logUtils::48::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-108220::INFO::2015-12-17
14:51:41,432::logUtils::51::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {u'94048698-25fe-4f3e-8f49-34b8633ee996':
{'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
'0.000238152', 'lastCheck': '6.9', 'valid': True},
u'e793de29-f95f-4512-9a57-ece72268b53f': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.00023453', 'lastCheck': '0.8',
'valid': True}, 'fac2b63e-4359-4ad8-871c-58ae92254ba9': {'code': 0,
'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000251451',
'lastCheck': '8.3', 'valid': True},
u'ff24327d-3080-4344-b083-43cc5acd5342': {'code': 0, 'actual': True,
'version': 3, 'acquired': True, 'delay': '0.000231135', 'lastCheck': '8.5',
'valid': True}}
Thread-108220::DEBUG::2015-12-17
14:51:41,432::task::1191::Storage.TaskManager.Task::(prepare)
Task=`2cdd058a-ef2c-4eea-a172-33b07f23be33`::finished:
{u'94048698-25fe-4f3e-8f49-34b8633ee996': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.000238152', 'lastCheck': '6.9',
'valid': True}, u'e793de29-f95f-4512-9a57-ece72268b53f': {'code': 0,
'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00023453',
'lastCheck': '0.8', 'valid': True},
'fac2b63e-4359-4ad8-871c-58ae92254ba9': {'code': 0, 'actual': True,
'version': 3, 'acquired': True, 'delay': '0.000251451', 'lastCheck': '8.3',
'valid': True}, u'ff24327d-3080-4344-b083-43cc5acd5342': {'code': 0,
'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000231135',
'lastCheck': '8.5', 'valid': True}}
Thread-108220::DEBUG::2015-12-17
14:51:41,432::task::595::Storage.TaskManager.Task::(_updateState)
Task=`2cdd058a-ef2c-4eea-a172-33b07f23be33`::moving from state preparing ->
state finished
Thread-108220::DEBUG::2015-12-17
14:51:41,432::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-108220::DEBUG::2015-12-17
14:51:41,432::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-108220::DEBUG::2015-12-17
14:51:41,432::task::993::Storage.TaskManager.Task::(_decref)
Task=`2cdd058a-ef2c-4eea-a172-33b07f23be33`::ref 0 aborting False
Thread-108220::INFO::2015-12-17
14:51:41,489::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:36777 stopped
Thread-108162::WARNING::2015-12-17
14:51:41,931::migration::526::virt.vm::(monitor_migration)
vmId=`0e956372-f4bd-4535-8472-73829277593f`::The migration took 40 seconds
which is exceeding the configured maximum time for migrations of 32
seconds. The migration will be aborted.
Thread-108162::DEBUG::2015-12-17
14:51:41,933::migration::558::virt.vm::(stop)
vmId=`0e956372-f4bd-4535-8472-73829277593f`::stopping migration monitor
thread
Thread-108160::DEBUG::2015-12-17
14:51:42,372::migration::558::virt.vm::(stop)
vmId=`0e956372-f4bd-4535-8472-73829277593f`::stopping migration monitor
thread
Thread-108160::DEBUG::2015-12-17
14:51:42,372::migration::453::virt.vm::(stop)
vmId=`0e956372-f4bd-4535-8472-73829277593f`::stopping migration downtime
thread
Thread-108161::DEBUG::2015-12-17 14:51:42,372::migration::450::virt.vm::(run)
vmId=`0e956372-f4bd-4535-8472-73829277593f`::migration downtime thread
exiting
Thread-108160::ERROR::2015-12-17
14:51:42,372::migration::208::virt.vm::(_recover)
vmId=`0e956372-f4bd-4535-8472-73829277593f`::Operation abgebrochen:
Migrations-Job: abgebrochen durch Client
[German: "Operation aborted: migration job: aborted by client"]
Thread-108160::DEBUG::2015-12-17
14:51:42,373::stompreactor::389::jsonrpc.AsyncoreClient::(send) Sending
response
Thread-108160::DEBUG::2015-12-17
14:51:42,406::__init__::206::jsonrpc.Notification::(emit) Sending event
{"params": {"notify_time": 4455461740,
"0e956372-f4bd-4535-8472-73829277593f": {"status": "Migration Source"}},
"jsonrpc": "2.0",
"method": "|virt|VM_status|0e956372-f4bd-4535-8472-73829277593f"}
Thread-108160::ERROR::2015-12-17 14:51:42,407::migration::310::virt.vm::(run)
vmId=`0e956372-f4bd-4535-8472-73829277593f`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/migration.py", line 294, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/share/vdsm/virt/migration.py", line 364, in _startUnderlyingMigration
    self._perform_migration(duri, muri)
  File "/usr/share/vdsm/virt/migration.py", line 403, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1836, in migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirtError: Operation abgebrochen: Migrations-Job: abgebrochen durch Client
[German: "Operation aborted: migration job: aborted by client"]
Thread-103715::DEBUG::2015-12-17
14:51:42,895::fileSD::173::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
if=/rhev/data-center/mnt/192.168.23.10:_mnt_sdb_nfs_DATA/ff24327d-3080-4344-b083-43cc5acd5342/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-103715::DEBUG::2015-12-17
14:51:42,905::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n463 bytes (463 B) copied,
0.000402171 s, 1.2 MB/s\n'; <rc> = 0
Thread-113::DEBUG::2015-12-17
14:51:43,093::fileSD::173::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
if=/rhev/data-center/mnt/192.168.23.10:_mnt_sdb_nfs_HOSTED__ENGINE/fac2b63e-4359-4ad8-871c-58ae92254ba9/dom_md/metadata
iflag=direct
of=/dev/null bs=4096 count=1 (cwd None)
Thread-113::DEBUG::2015-12-17
14:51:43,100::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n316 bytes (316 B) copied,
0.000300531 s, 1.1 MB/s\n'; <rc> = 0
##################################
vdsm.log on target
##################################
(It is too long and therefore attached, but below are the significant lines:)
1:32,193::periodic::254::virt.periodic.VmDispatcher::(__call__) could not run
<class 'virt.periodic.DriveWatermarkMonitor'> on
[u'0e956372-f4bd-4535-8472-73829277593f']
Thread-4556::DEBUG::2015-12-17
14:51:32,752::fileSD::173::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
if=/rhev/data-center/mnt/192.168.23.10:_mnt_sdb_nfs_ISO/e793de29-f95f-4512-9a57-ece72268b53f/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-4556::DEBUG::2015-12-17
14:51:32,760::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n330 bytes (330 B) copied,
0.000202573 s, 1.6 MB/s\n'; <rc> = 0
Thread-4246::DEBUG::2015-12-17
14:51:33,185::fileSD::173::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
if=/rhev/data-center/mnt/192.168.23.10:_mnt_sdb_nfs_DATA/ff24327d-3080-4344-b083-43cc5acd5342/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-4246::DEBUG::2015-12-17
14:51:33,194::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n463 bytes (463 B) copied,
0.000259942 s, 1.8 MB/s\n'; <rc> = 0
periodic/4::WARNING::2015-12-17
14:51:34,193::periodic::254::virt.periodic.VmDispatcher::(__call__) could
not run <class 'virt.periodic.DriveWatermarkMonitor'> on
[u'0e956372-f4bd-4535-8472-73829277593f']
periodic/2::WARNING::2015-12-17
14:51:36,194::periodic::254::virt.periodic.VmDispatcher::(__call__) could
not run <class 'virt.periodic.DriveWatermarkMonitor'> on
[u'0e956372-f4bd-4535-8472-73829277593f']
periodic/1::WARNING::2015-12-17
14:51:36,615::periodic::254::virt.periodic.VmDispatcher::(__call__) could
not run <class 'virt.periodic.BlockjobMonitor'> on
[u'0e956372-f4bd-4535-8472-73829277593f']
periodic/4::WARNING::2015-12-17
14:51:36,615::periodic::277::virt.vm::(__call__)
vmId=`0e956372-f4bd-4535-8472-73829277593f`::could not run on
0e956372-f4bd-4535-8472-73829277593f: domain not connected
periodic/2::DEBUG::2015-12-17
14:51:36,625::sampling::531::virt.sampling.VMBulkSampler::(__call__)
sampling 0 domains
.....
Thread-8857::INFO::2015-12-17
14:51:40,738::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:58492 stopped
periodic/4::WARNING::2015-12-17
14:51:42,197::periodic::254::virt.periodic.VmDispatcher::(__call__) could
not run <class 'virt.periodic.DriveWatermarkMonitor'> on
[u'0e956372-f4bd-4535-8472-73829277593f']
Thread-4556::DEBUG::2015-12-17
14:51:42,763::fileSD::173::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
if=/rhev/data-center/mnt/192.168.23.10:_mnt_sdb_nfs_ISO/e793de29-f95f-4512-9a57-ece72268b53f/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-4556::DEBUG::2015-12-17
14:51:42,770::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n330 bytes (330 B) copied,
0.000225483 s, 1.5 MB/s\n'; <rc> = 0
Thread-4246::DEBUG::2015-12-17
14:51:43,199::fileSD::173::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
if=/rhev/data-center/mnt/192.168.23.10:_mnt_sdb_nfs_DATA/ff24327d-3080-4344-b083-43cc5acd5342/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-4246::DEBUG::2015-12-17
14:51:43,207::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n463 bytes (463 B) copied,
0.000226464 s, 2.0 MB/s\n'; <rc> = 0
periodic/2::WARNING::2015-12-17
14:51:44,198::periodic::254::virt.periodic.VmDispatcher::(__call__) could
not run <class 'virt.periodic.DriveWatermarkMonitor'> on
[u'0e956372-f4bd-4535-8472-73829277593f']
.....
Thread-8781::ERROR::2015-12-17
14:51:46,636::vm::752::virt.vm::(_startUnderlyingVm)
vmId=`0e956372-f4bd-4535-8472-73829277593f`::Failed to start a migration
destination vm
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 721, in _startUnderlyingVm
    self._completeIncomingMigration()
  File "/usr/share/vdsm/virt/vm.py", line 2785, in _completeIncomingMigration
    self._incomingMigrationFinished.isSet(), usedTimeout)
  File "/usr/share/vdsm/virt/vm.py", line 2838, in _attachLibvirtDomainAfterMigration
    raise MigrationError(e.get_error_message())
MigrationError: Domain nicht gefunden: Keine Domain mit übereinstimmender
UUID '0e956372-f4bd-4535-8472-73829277593f'
[German: "Domain not found: no domain with matching UUID '0e956372-f4bd-4535-8472-73829277593f'"]
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users