On Tue, Sep 24, 2013 at 02:41:58PM -0300, emitor@gmail.com wrote:
Thanks for your answer, Dan!
Yesterday I was talking with a user on IRC who suggested upgrading libvirt
to 1.1.2, since live migration worked on his setup after he did so.
I've upgraded libvirt, but I'm still having the issue. I'm sending you the
logs you asked for, along with the information below:
OS Version: Fedora - 19 - 3
Kernel Version: 3.11.1 - 200.fc19.x86_64
KVM Version: 1.4.2 - 9.fc19
LIBVIRT Version: libvirt-1.1.2-1.fc19
VDSM Version: vdsm-4.12.1-2.fc19
SPICE Version: 0.12.4 - 1.fc19
iSCSI Initiator Name: iqn.1994-05.com.redhat:d990cf85cdeb
SPM Priority: Medium
Active VMs: 1
CPU Name: Intel Westmere Family
CPU Type: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
CPU Sockets: 1
CPU Cores per Socket: 4
CPU Threads per Core: 2 (SMT Enabled)
Physical Memory: 12007 MB total, 2762 MB used, 9245 MB free
Swap Size: 15999 MB total, 0 MB used, 15999 MB free
Shared Memory: 0%
Max free Memory for scheduling new VMs: 15511.5 MB
Memory Page Sharing: Inactive
Automatic Large Pages: Always
(Both hypervisors have the same hardware and software versions.)
I'm going to keep trying some things, because something must have gotten
messed up: I now have a Debian VM that doesn't start, failing with
"Failed to run VM debian on Host ovirt1." and "Failed to run VM debian on
Host ovirt2."
Anyway, I'll wait for your answer.
Best regards!
Emiliano
Your destination Vdsm has:
vmId=`1f7e60c7-51cb-469a-8016-58a5837f3316`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2022, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/vm.py", line 2819, in _run
    devices = self.buildConfDevices()
  File "/usr/share/vdsm/vm.py", line 1839, in buildConfDevices
    devices = self.getConfDevices()
  File "/usr/share/vdsm/vm.py", line 1806, in getConfDevices
    self.normalizeDrivesIndices(devices[DISK_DEVICES])
  File "/usr/share/vdsm/vm.py", line 1990, in normalizeDrivesIndices
    if drv['iface'] not in self._usedIndices:
KeyError: 'iface'
This looks just like Bug 1011472 - [vdsm] cannot recover VM upon vdsm
restart after a disk has been hot plugged to it.
Could it be that you have hot-plugged a disk to your VM at the source host?
Somehow, Vdsm forgets to keep the 'iface' element passed from Engine for the
hot-plugged disk.
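To illustrate (a minimal sketch with made-up device dicts, not the actual
Vdsm code), the indexing loop assumes every disk dict carries an 'iface'
key, so a hot-plugged disk that lost it fails exactly like your traceback:

    # Minimal sketch, assuming simplified device dicts; not the real vm.py code.
    disk_devices = [
        {'device': 'disk', 'iface': 'virtio', 'index': '0'},
        {'device': 'disk'},  # hot-plugged disk whose 'iface' was dropped
    ]

    used_indices = {}  # interface name -> indices already taken

    for drv in disk_devices:
        try:
            # Same access pattern as vm.py line 1990: a plain drv['iface']
            # raises KeyError as soon as the key is missing.
            if drv['iface'] not in used_indices:
                used_indices[drv['iface']] = []
        except KeyError as err:
            print('hot-plugged disk is missing the %s element' % err)

    # A defensive drv.get('iface', 'virtio') would avoid the crash, but the
    # real fix is to keep the 'iface' element that Engine passed in.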
Dan.