[Users] Live migration fails

Hideo GOTO gotoh at eis.co.jp
Tue Mar 27 15:35:11 UTC 2012


Live migration consistently fails in the following test
environment, despite every configuration variation I could think of.

Apart from live migration, everything else works more or less fine.

Engine host: DELL R200 8GB RAM
Fedora 16 + ovirt-engine 3.0.0_0001-1.6.fc16

Node host1: DELL R415 / 16 core AMD / 32GB RAM
 OS: ovirt-node-image-2.2.3-1.1.fc16.iso

Node host2: DELL R415 / 8 core AMD / 16GB RAM
 OS: ovirt-node-image-2.2.3-1.1.fc16.iso

Storage server: CentOS6.2 x86_64 + iscsi-server (IET) + DRBD
DELL R410 x 2 , replicated using DRBD

A 500GB partition is exported as a single LUN for the storage domain, shared
by several VMs (50GB each).

The storage network is a separate gigabit Ethernet network, independent of the ovirtmgmt network.

Every network interface's FQDN and IP address is resolved correctly by DNS
without ambiguity, and NTP also seems to work correctly. The error
message in the source node's vdsm.log describes the issue as a connection
problem to the peer node over qemu+tls (please see the log below).

Thanks in advance for any suggestions or advice.
It would also help me a great deal if you could tell me how to test and
diagnose the qemu+tls connection.
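
To be concrete, the kind of connectivity check I have in mind is something
like the sketch below, run on the source node with the same python-libvirt
bindings that vdsm uses. The destination IP is taken from the log further
down; the read-only connection and the assumption that libvirtd listens for
TLS on its default port 16514 (with the certificates already deployed by
ovirt-node) are mine, so please correct me if a different test is more
appropriate.

#!/usr/bin/python
# Sketch of a qemu+tls connectivity test from the source node, using the
# same python-libvirt bindings that vdsm calls (libvirt.py in the traceback).
# Assumptions: destination 192.168.15.210 as in the log, libvirtd listening
# for TLS on its default port 16514, client certificates already in place.
import libvirt

URI = 'qemu+tls://192.168.15.210/system'

try:
    # A read-only connection is enough to prove that the TLS handshake
    # and certificate authentication between the two nodes work.
    conn = libvirt.openReadOnly(URI)
except libvirt.libvirtError as e:
    print 'Connection to %s failed: %s' % (URI, e)
else:
    print 'Connected to %s, remote hostname: %s' % (URI, conn.getHostname())
    conn.close()

If this already fails with the same "Failed to connect to remote libvirt
URI" error, I suppose the problem lies in the TLS/certificate layer rather
than in the migration itself.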

Hideo GOTO

P.S.
vdsm.log of the source ovirt node
==================================

Thread-7599::DEBUG::2012-03-27 14:46:36,816::clientIF::81::vds::(wrapper) return migrate with {'status': {'message': 'Migration process starting', 'code': 0}}
Thread-7600::DEBUG::2012-03-27 14:46:36,818::vm::116::vm.Vm::(_setupVdsConnection) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::Destination server is: 192.168.15.210:54321
Thread-7600::DEBUG::2012-03-27 14:46:36,822::vm::118::vm.Vm::(_setupVdsConnection) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::Initiating connection with destination
Thread-7600::DEBUG::2012-03-27 14:46:36,853::libvirtvm::232::vm.Vm::(_getDiskStats) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::Disk hdc stats not available
Thread-7600::DEBUG::2012-03-27 14:46:36,855::libvirtvm::261::vm.Vm::(_getDiskLatency) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::Disk vda latency not available
Thread-7600::DEBUG::2012-03-27 14:46:36,856::libvirtvm::261::vm.Vm::(_getDiskLatency) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::Disk hdc latency not available
Thread-7600::DEBUG::2012-03-27 14:46:36,863::vm::164::vm.Vm::(_prepareGuest) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::migration Process begins
Thread-7600::DEBUG::2012-03-27 14:46:36,881::vm::211::vm.Vm::(run) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::migration semaphore acquired
Thread-7602::DEBUG::2012-03-27 14:46:39,580::task::588::TaskManager.Task::(_updateState) Task=`c3896a9b-0a36-487c-b8bd-ebe2b85e7e1d`::moving from state init -> state preparing
Thread-7602::INFO::2012-03-27 14:46:39,582::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-7602::INFO::2012-03-27 14:46:39,583::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'0bf36b0e-e6dc-4af1-90de-78375fe469b4': {'delay': '0.00161194801331', 'lastCheck': 1332859591.922013, 'code': 0, 'valid': True}, '2a60a743-0b41-4b52-8b6e-a57565064ba4': {'delay': '0.00288891792297', 'lastCheck': 1332859598.455845, 'code': 0, 'valid': True}}
Thread-7602::DEBUG::2012-03-27 14:46:39,586::task::588::TaskManager.Task::(_updateState) Task=`c3896a9b-0a36-487c-b8bd-ebe2b85e7e1d`::moving from state preparing -> state finished
Thread-7602::DEBUG::2012-03-27 14:46:39,587::resourceManager::806::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-7602::DEBUG::2012-03-27 14:46:39,588::resourceManager::841::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-7602::DEBUG::2012-03-27 14:46:39,590::task::980::TaskManager.Task::(_decref) Task=`c3896a9b-0a36-487c-b8bd-ebe2b85e7e1d`::ref 0 aborting False
Thread-7603::DEBUG::2012-03-27 14:46:39,637::libvirtvm::232::vm.Vm::(_getDiskStats) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::Disk hdc stats not available
Thread-7603::DEBUG::2012-03-27 14:46:39,639::libvirtvm::261::vm.Vm::(_getDiskLatency) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::Disk vda latency not available
Thread-7603::DEBUG::2012-03-27 14:46:39,640::libvirtvm::261::vm.Vm::(_getDiskLatency) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::Disk hdc latency not available
Thread-7600::DEBUG::2012-03-27 14:46:39,693::libvirtvm::419::vm.Vm::(_startUnderlyingMigration) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::starting migration to qemu+tls://192.168.15.210/system
Thread-7604::DEBUG::2012-03-27 14:46:39,695::libvirtvm::317::vm.Vm::(run) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::migration downtime thread started
Thread-7605::DEBUG::2012-03-27 14:46:39,697::libvirtvm::345::vm.Vm::(run) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::starting migration monitor thread
Thread-7600::DEBUG::2012-03-27 14:46:39,747::libvirtvm::332::vm.Vm::(cancel) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::canceling migration downtime thread
Thread-7600::DEBUG::2012-03-27 14:46:39,749::libvirtvm::382::vm.Vm::(stop) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::stopping migration monitor thread
Thread-7604::DEBUG::2012-03-27 14:46:39,749::libvirtvm::329::vm.Vm::(run) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::migration downtime thread exiting
Thread-7600::ERROR::2012-03-27 14:46:39,752::vm::170::vm.Vm::(_recover) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::operation failed: Failed to connect to remote libvirt URI qemu+tls://192.168.15.210/system
Thread-7600::ERROR::2012-03-27 14:46:40,321::vm::234::vm.Vm::(run) vmId=`6046aa18-5146-4284-8026-dca5979c3326`::Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 217, in run
  File "/usr/share/vdsm/libvirtvm.py", line 443, in _startUnderlyingMigration
  File "/usr/share/vdsm/libvirtvm.py", line 483, in f
  File "/usr/share/vdsm/libvirtconnection.py", line 79, in wrapper
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 971, in migrateToURI2
libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tls://192.168.15.210/system



