[Users] Migration issue: asking for help
xianghuadu
xianghuadu at gmail.com
Tue Mar 5 09:28:17 UTC 2013
Hi all,
I have recently been evaluating oVirt and have run into a problem.
When I migrate a VM, the migration fails with the error: Migration failed due to Error: Could not connect to peer host.
My environment is:
KVM hosts: 2 x Dell 2950
storage: iSCSI target
guest OS: Windows Server 2008 R2
oVirt engine log:
2013-03-05 14:52:23,074 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-42) [323d7ca8] VM centos 4cc23d92-8667-4710-9714-a67c0d178fa0 moved from MigratingFrom --> Up
2013-03-05 14:52:23,076 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-42) [323d7ca8] adding VM 4cc23d92-8667-4710-9714-a67c0d178fa0 to re-run list
2013-03-05 14:52:23,079 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-42) [323d7ca8] Rerun vm 4cc23d92-8667-4710-9714-a67c0d178fa0. Called from vds 205
2013-03-05 14:52:23,085 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-49) [323d7ca8] START, MigrateStatusVDSCommand(HostName = 205, HostId = 4e7d1ae2-824e-11e2-bb4c-00188be4de29, vmId=4cc23d92-8667-4710-9714-a67c0d178fa0), log id: 618085d
2013-03-05 14:52:23,131 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] Failed in MigrateStatusVDS method
2013-03-05 14:52:23,132 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] Error code noConPeer and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Could not connect to peer VDS
2013-03-05 14:52:23,134 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] Command org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value
Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 10
mMessage Could not connect to peer VDS
2013-03-05 14:52:23,138 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] HostName = 205
2013-03-05 14:52:23,139 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-49) [323d7ca8] Command MigrateStatusVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Could not connect to peer VDS
2013-03-05 14:52:23,141 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-49) [323d7ca8] FINISH, MigrateStatusVDSCommand, log
vdsm log:
Thread-5969::DEBUG::2013-03-05 14:52:21,312::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available
Thread-5622::ERROR::2013-03-05 14:52:22,890::vm::200::vm.Vm::(_recover) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Failed to destroy remote VM
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 198, in _recover
    self.destServer.destroy(self._vm.id)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1253, in request
    return self._parse_response(h.getfile(), sock)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1382, in _parse_response
    response = file.read(1024)
  File "/usr/lib64/python2.6/socket.py", line 383, in read
    data = self._sock.recv(left)
  File "/usr/lib64/python2.6/ssl.py", line 215, in recv
    return self.read(buflen)
  File "/usr/lib64/python2.6/ssl.py", line 136, in read
    return self._sslobj.read(len)
SSLError: The read operation timed out
Thread-5622::ERROR::2013-03-05 14:52:22,909::vm::283::vm.Vm::(run) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 268, in run
    self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 443, in _startUnderlyingMigration
    response = self.destServer.migrationCreate(self._machineParams)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1253, in request
    return self._parse_response(h.getfile(), sock)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1382, in _parse_response
    response = file.read(1024)
  File "/usr/lib64/python2.6/socket.py", line 383, in read
    data = self._sock.recv(left)
  File "/usr/lib64/python2.6/ssl.py", line 215, in recv
    return self.read(buflen)
  File "/usr/lib64/python2.6/ssl.py", line 136, in read
    return self._sslobj.read(len)
SSLError: The read operation timed out
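From the traceback, the source vdsm times out on an SSL read while calling the destination vdsm over XML-RPC (destServer.migrationCreate), which matches the "Could not connect to peer VDS" error on the engine side. Below is a small connectivity check that can be run on the source host; this is only a minimal sketch: the destination IP is a placeholder, and the port and PKI paths assume a default vdsm install.

import socket
import ssl

DEST_HOST = '192.168.1.236'   # placeholder: IP of the migration destination host
VDSM_PORT = 54321             # assumed default vdsm XML-RPC port

# Assumed default vdsm PKI paths; adjust if your install differs.
KEY = '/etc/pki/vdsm/keys/vdsmkey.pem'
CERT = '/etc/pki/vdsm/certs/vdsmcert.pem'
CA = '/etc/pki/vdsm/certs/cacert.pem'

# First a plain TCP connect, then an SSL handshake with the vdsm certs,
# roughly mimicking what the source vdsm does when contacting the peer.
sock = socket.create_connection((DEST_HOST, VDSM_PORT), timeout=10)
ssl_sock = ssl.wrap_socket(sock, keyfile=KEY, certfile=CERT,
                           ca_certs=CA, cert_reqs=ssl.CERT_REQUIRED)
print 'TCP connect and SSL handshake OK; peer cert subject:', \
    ssl_sock.getpeercert()['subject']
ssl_sock.close()

If this fails or hangs, it would point to a network, firewall, or certificate problem between the two hosts rather than the migration itself.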
Thread-5971::DEBUG::2013-03-05 14:52:23,384::BindingXMLRPC::903::vds::(wrapper) client [192.168.1.201]::call vmGetStats with ('4cc23d92-8667-4710-9714-a67c0d178fa0',) {} flowID [323d7ca8]
Thread-5971::DEBUG::2013-03-05 14:52:23,385::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available
Thread-5971::DEBUG::2013-03-05 14:52:23,385::BindingXMLRPC::910::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '3135', 'displayIp': '192.168.1.235', 'displayPort': u'5900', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': '-2', 'hash': '-761593583205877164', 'balloonInfo': {'balloon_max': 524288, 'balloon_cur': 524288}, 'pauseCode': 'NOERR', 'clientIp': '', 'kvmEnable': 'true', 'network': {u'vnet0': {'macAddr': '00:1a:4a:a8:01:52', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmId': '4cc23d92-8667-4710-9714-a67c0d178fa0', 'displayType': 'vnc', 'cpuUser': '1.85', 'disks': {u'vda': {'readRate': '0.00', 'truesize': '21474836480', 'apparentsize': '21474836480', 'writeRate': '407.16', 'imageID': '6b252eb8-ab9f-445f-922e-52d86bc6d790'}, u'hdc': {'readLatency': '0', 'apparentsize': '0', 'writeLatency': '0', 'flushLatency': '0', 'readRate': '0.00', 'truesize': '0', 'writeRate': '0.00'}}, 'monitorResponse': '0', 'statsAge': '0.67', 'elapsedTime': '9684', 'vmType': 'kvm', 'cpuSys': '5.93', 'appsList': [], 'guestIPs': ''}]}
Thread-5972::DEBUG::2013-03-05 14:52:23,408::BindingXMLRPC::903::vds::(wrapper) client [192.168.1.201]::call vmGetMigrationStatus with ('4cc23d92-8667-4710-9714-a67c0d178fa0',) {} flowID [323d7ca8]
Thread-5972::DEBUG::2013-03-05 14:52:23,408::BindingXMLRPC::910::vds::(wrapper) return vmGetMigrationStatus with {'status': {'message': 'Could not connect to peer VDS', 'code': 10}, 'progress': 10}
Thread-21::DEBUG::2013-03-05 14:52:26,888::misc::83::Storage.Misc.excCmd::(<lambda>) '/bin/dd iflag=direct if=/dev/0e5827a5-6f3c-49be-be9b-0bfb65198644/metadata bs=4096 count=1' (cwd None)
Thread-21::DEBUG::2013-03-05 14:52:26,900::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.00036552 s, 11.2 MB/s\n'; <rc> = 0
Thread-5976::DEBUG::2013-03-05 14:52:31,555::task::568::TaskManager.Task::(_updateState) Task=`da545231-9538-412d-966e-a056ca7d074a`::moving from state init -> state preparing
Thread-5976::INFO::2013-03-05 14:52:31,556::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-5976::INFO::2013-03-05 14:52:31,556::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129568576813', 'lastCheck': '4.7', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.0022189617157', 'lastCheck': '4.9', 'code': 0, 'valid': True}}
Thread-5976::DEBUG::2013-03-05 14:52:31,556::task::1151::TaskManager.Task::(prepare) Task=`da545231-9538-412d-966e-a056ca7d074a`::finished: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129568576813', 'lastCheck': '4.7', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.0022189617157', 'lastCheck': '4.9', 'code': 0, 'valid': True}}
Thread-5976::DEBUG::2013-03-05 14:52:31,556::task::568::TaskManager.Task::(_updateState) Task=`da545231-9538-412d-966e-a056ca7d074a`::moving from state preparing -> state finished
Thread-5976::DEBUG::2013-03-05 14:52:31,556::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-5976::DEBUG::2013-03-05 14:52:31,556::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-5976::DEBUG::2013-03-05 14:52:31,557::task::957::TaskManager.Task::(_decref) Task=`da545231-9538-412d-966e-a056ca7d074a`::ref 0 aborting False
Thread-5977::DEBUG::2013-03-05 14:52:31,565::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available
Thread-21::DEBUG::2013-03-05 14:52:36,904::misc::83::Storage.Misc.excCmd::(<lambda>) '/bin/dd iflag=direct if=/dev/0e5827a5-6f3c-49be-be9b-0bfb65198644/metadata bs=4096 count=1' (cwd None)
Thread-21::DEBUG::2013-03-05 14:52:36,917::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000400626 s, 10.2 MB/s\n'; <rc> = 0
VM Channels Listener::DEBUG::2013-03-05 14:52:41,337::vmChannels::60::vds::(_handle_timeouts) Timeout on fileno 18.
Thread-5982::DEBUG::2013-03-05 14:52:41,786::task::568::TaskManager.Task::(_updateState) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::moving from state init -> state preparing
Thread-5982::INFO::2013-03-05 14:52:41,786::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-5982::INFO::2013-03-05 14:52:41,786::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0130889415741', 'lastCheck': '4.9', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00270700454712', 'lastCheck': '5.1', 'code': 0, 'valid': True}}
Thread-5982::DEBUG::2013-03-05 14:52:41,787::task::1151::TaskManager.Task::(prepare) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::finished: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0130889415741', 'lastCheck': '4.9', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00270700454712', 'lastCheck': '5.1', 'code': 0, 'valid': True}}
Thread-5982::DEBUG::2013-03-05 14:52:41,787::task::568::TaskManager.Task::(_updateState) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::moving from state preparing -> state finished
Thread-5982::DEBUG::2013-03-05 14:52:41,787::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-5982::DEBUG::2013-03-05 14:52:41,787::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-5982::DEBUG::2013-03-05 14:52:41,787::task::957::TaskManager.Task::(_decref) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::ref 0 aborting False
Thread-5983::DEBUG::2013-03-05 14:52:41,795::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available
Thread-21::DEBUG::2013-03-05 14:52:46,921::misc::83::Storage.Misc.excCmd::(<lambda>) '/bin/dd iflag=direct if=/dev/0e5827a5-6f3c-49be-be9b-0bfb65198644/metadata bs=4096 count=1' (cwd None)
Thread-21::DEBUG::2013-03-05 14:52:46,933::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000389226 s, 10.5 MB/s\n'; <rc> = 0
Thread-5988::DEBUG::2013-03-05 14:52:52,015::task::568::TaskManager.Task::(_updateState) Task=`d8d9ccc1-f17b-4372-a27e-bbee4eda1736`::moving from state init -> state preparing
Thread-5988::INFO::2013-03-05 14:52:52,016::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-5988::INFO::2013-03-05 14:52:52,016::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129730701447', 'lastCheck': '5.1', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00222110748291', 'lastCheck': '5.3', 'code': 0, 'valid': True}}
Thread-5988::DEBUG::2013-03-05 14:52:52,016::task::1151::TaskManager.Task::(prepare) Task=`d8d9ccc1-f17b-4372-a27e-bbee4eda1736`::finished: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129730701447', 'lastCheck': '5.1', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00222110748291', 'lastCheck': '5.3', 'code': 0, 'valid': True}}
Thread-5988::DEBUG::2013-03-05 14:52:52,016::task::568::TaskManager.Task::(_updateState) Task=`d8d9ccc1-f17b-4372-a27e-bbee4eda1736`::moving from state preparing -> state finished
xianghuadu