<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content="text/html; charset=utf-8" http-equiv=Content-Type>
<STYLE>
BLOCKQUOTE {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em
}
OL {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
UL {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
P {
        MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
BODY {
        LINE-HEIGHT: 1.5; FONT-FAMILY: 微软雅黑; COLOR: #000080; FONT-SIZE: 10.5pt
}
</STYLE>
<META name=GENERATOR content="MSHTML 8.00.7601.17744"></HEAD>
<BODY style="MARGIN: 10px">
<DIV>hi mark</DIV>
<DIV>
Could we correspond by email in Chinese?
</DIV>
<DIV> </DIV>
<HR style="WIDTH: 210px; HEIGHT: 1px" align=left color=#b5c4df SIZE=1>
<DIV><SPAN>xianghuadu</SPAN></DIV>
<DIV> </DIV>
<DIV
style="BORDER-BOTTOM: medium none; BORDER-LEFT: medium none; PADDING-BOTTOM: 0cm; PADDING-LEFT: 0cm; PADDING-RIGHT: 0cm; BORDER-TOP: #b5c4df 1pt solid; BORDER-RIGHT: medium none; PADDING-TOP: 3pt">
<DIV
style="PADDING-BOTTOM: 8px; PADDING-LEFT: 8px; PADDING-RIGHT: 8px; BACKGROUND: #efefef; COLOR: #000000; FONT-SIZE: 12px; PADDING-TOP: 8px">
<DIV><B>From:</B> <A href="mailto:wudxw@linux.vnet.ibm.com">Mark
Wu</A></DIV>
<DIV><B>Date:</B> 2013-03-05 18:33</DIV>
<DIV><B>To:</B> <A href="mailto:xianghuadu@gmail.com">xianghuadu</A></DIV>
<DIV><B>CC:</B> <A href="mailto:users@ovirt.org">users</A></DIV>
<DIV><B>Subject:</B> Re: [Users] Migration issue Asking For
Help</DIV></DIV></DIV>
<DIV>
<DIV>On Tue 05 Mar 2013 05:28:17 PM CST, xianghuadu wrote:</DIV>
<DIV>> hi all</DIV>
<DIV>> I have recently run into a problem while evaluating oVirt.</DIV>
<DIV>> In the vm migration occurs when the error: Migration failed due to Error: Could not connect to peer host.</DIV>
<DIV>> My environment is:</DIV>
<DIV>> KVM dell 2950 * 2</DIV>
<DIV>> storage iscsi-target</DIV>
<DIV>> vm system windows 2008 r2</DIV>
<DIV>> ovirt-log:</DIV>
<DIV>> 2013-03-05 14:52:23,074 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-42) [323d7ca8] VM centos 4cc23d92-8667-4710-9714-a67c0d178fa0 moved from MigratingFrom --> Up</DIV>
<DIV>> 2013-03-05 14:52:23,076 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-42) [323d7ca8] adding VM 4cc23d92-8667-4710-9714-a67c0d178fa0 to re-run list</DIV>
<DIV>> 2013-03-05 14:52:23,079 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-42) [323d7ca8] Rerun vm 4cc23d92-8667-4710-9714-a67c0d178fa0. Called from vds 205</DIV>
<DIV>> 2013-03-05 14:52:23,085 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-49) [323d7ca8] START, MigrateStatusVDSCommand(HostName = 205, HostId = 4e7d1ae2-824e-11e2-bb4c-00188be4de29, vmId=4cc23d92-8667-4710-9714-a67c0d178fa0), log id: 618085d</DIV>
<DIV>> 2013-03-05 14:52:23,131 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] Failed in MigrateStatusVDS method</DIV>
<DIV>> 2013-03-05 14:52:23,132 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] Error code noConPeer and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Could not connect to peer VDS</DIV>
<DIV>> 2013-03-05 14:52:23,134 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] Command org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value</DIV>
<DIV>></DIV>
<DIV>> Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc</DIV>
<DIV>> mStatus Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc</DIV>
<DIV>> mCode 10</DIV>
<DIV>> mMessage Could not connect to peer VDS</DIV>
<DIV>> 2013-03-05 14:52:23,138 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] HostName = 205</DIV>
<DIV>> 2013-03-05 14:52:23,139 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-49) [323d7ca8] Command MigrateStatusVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Could not connect to peer VDS</DIV>
<DIV>> 2013-03-05 14:52:23,141 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-49) [323d7ca8] FINISH, MigrateStatusVDSCommand, log</DIV>
<DIV>></DIV>
<DIV>></DIV>
<DIV>> vdsm-log:</DIV>
<DIV>> Thread-5969::DEBUG::2013-03-05 14:52:21,312::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available</DIV>
<DIV>> Thread-5622::ERROR::2013-03-05 14:52:22,890::vm::200::vm.Vm::(_recover) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Failed to destroy remote VM</DIV>
<DIV>> Traceback (most recent call last):</DIV>
<DIV>> File "/usr/share/vdsm/vm.py", line 198, in _recover</DIV>
<DIV>> self.destServer.destroy(self._vm.id)</DIV>
<DIV>> File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__</DIV>
<DIV>> return self.__send(self.__name, args)</DIV>
<DIV>> File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request</DIV>
<DIV>> verbose=self.__verbose</DIV>
<DIV>> File "/usr/lib64/python2.6/xmlrpclib.py", line 1253, in request</DIV>
<DIV>> return self._parse_response(h.getfile(), sock)</DIV>
<DIV>> File "/usr/lib64/python2.6/xmlrpclib.py", line 1382, in _parse_response</DIV>
<DIV>> response = file.read(1024)</DIV>
<DIV>> File "/usr/lib64/python2.6/socket.py", line 383, in read</DIV>
<DIV>> data = self._sock.recv(left)</DIV>
<DIV>> File "/usr/lib64/python2.6/ssl.py", line 215, in recv</DIV>
<DIV>> return self.read(buflen)</DIV>
<DIV>> File "/usr/lib64/python2.6/ssl.py", line 136, in read</DIV>
<DIV>> return self._sslobj.read(len)</DIV>
<DIV>> SSLError: The read operation timed out</DIV>
<DIV>> Thread-5622::ERROR::2013-03-05 14:52:22,909::vm::283::vm.Vm::(run) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Failed to migrate</DIV>
<DIV>> Traceback (most recent call last):</DIV>
<DIV>> File "/usr/share/vdsm/vm.py", line 268, in run</DIV>
<DIV>> self._startUnderlyingMigration()</DIV>
<DIV>> File "/usr/share/vdsm/libvirtvm.py", line 443, in _startUnderlyingMigration</DIV>
<DIV>> response = self.destServer.migrationCreate(self._machineParams)</DIV>
<DIV>> File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__</DIV>
<DIV>> return self.__send(self.__name, args)</DIV>
<DIV>> File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request</DIV>
<DIV>> verbose=self.__verbose</DIV>
<DIV>> File "/usr/lib64/python2.6/xmlrpclib.py", line 1253, in request</DIV>
<DIV>> return self._parse_response(h.getfile(), sock)</DIV>
<DIV>> File "/usr/lib64/python2.6/xmlrpclib.py", line 1382, in _parse_response</DIV>
<DIV>> response = file.read(1024)</DIV>
<DIV>> File "/usr/lib64/python2.6/socket.py", line 383, in read</DIV>
<DIV>> data = self._sock.recv(left)</DIV>
<DIV>> File "/usr/lib64/python2.6/ssl.py", line 215, in recv</DIV>
<DIV>> return self.read(buflen)</DIV>
<DIV>> File "/usr/lib64/python2.6/ssl.py", line 136, in read</DIV>
<DIV>> return self._sslobj.read(len)</DIV>
<DIV>> SSLError: The read operation timed out</DIV>
<DIV>> Thread-5971::DEBUG::2013-03-05 14:52:23,384::BindingXMLRPC::903::vds::(wrapper) client [192.168.1.201]::call vmGetStats with ('4cc23d92-8667-4710-9714-a67c0d178fa0',) {} flowID [323d7ca8]</DIV>
<DIV>> Thread-5971::DEBUG::2013-03-05 14:52:23,385::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available</DIV>
<DIV>> Thread-5971::DEBUG::2013-03-05 14:52:23,385::BindingXMLRPC::910::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '3135', 'displayIp': '192.168.1.235', 'displayPort': u'5900', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': '-2', 'hash': '-761593583205877164', 'balloonInfo': {'balloon_max': 524288, 'balloon_cur': 524288}, 'pauseCode': 'NOERR', 'clientIp': '', 'kvmEnable': 'true', 'network': {u'vnet0': {'macAddr': '00:1a:4a:a8:01:52', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmId': '4cc23d92-8667-4710-9714-a67c0d178fa0', 'displayType': 'vnc', 'cpuUser': '1.85', 'disks': {u'vda': {'readRate': '0.00', 'truesize': '21474836480', 'apparentsize': '21474836480', 'writeRate': '407.16', 'imageID': '6b252eb8-ab9f-445f-9!</DIV>
<DIV> 22e-52d86b</DIV>
<DIV>c6d790'}, u'hdc': {'readLatency': '0', 'apparentsize': '0', 'writeLatency': '0', 'flushLatency': '0', 'readRate': '0.00', 'truesize': '0', 'writeRate': '0.00'}}, 'monitorResponse': '0', 'statsAge': '0.67', 'elapsedTime': '9684', 'vmType': 'kvm', 'cpuSys': '5.93', 'appsList': [], 'guestIPs': ''}]}</DIV>
<DIV>> Thread-5972::DEBUG::2013-03-05 14:52:23,408::BindingXMLRPC::903::vds::(wrapper) client [192.168.1.201]::call vmGetMigrationStatus with ('4cc23d92-8667-4710-9714-a67c0d178fa0',) {} flowID [323d7ca8]</DIV>
<DIV>> Thread-5972::DEBUG::2013-03-05 14:52:23,408::BindingXMLRPC::910::vds::(wrapper) return vmGetMigrationStatus with {'status': {'message': 'Could not connect to peer VDS', 'code': 10}, 'progress': 10}</DIV>
<DIV>> Thread-21::DEBUG::2013-03-05 14:52:26,888::misc::83::Storage.Misc.excCmd::(<lambda>) '/bin/dd iflag=direct if=/dev/0e5827a5-6f3c-49be-be9b-0bfb65198644/metadata bs=4096 count=1' (cwd None)</DIV>
<DIV>> Thread-21::DEBUG::2013-03-05 14:52:26,900::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.00036552 s, 11.2 MB/s\n'; <rc> = 0</DIV>
<DIV>> Thread-5976::DEBUG::2013-03-05 14:52:31,555::task::568::TaskManager.Task::(_updateState) Task=`da545231-9538-412d-966e-a056ca7d074a`::moving from state init -> state preparing</DIV>
<DIV>> Thread-5976::INFO::2013-03-05 14:52:31,556::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)</DIV>
<DIV>> Thread-5976::INFO::2013-03-05 14:52:31,556::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129568576813', 'lastCheck': '4.7', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.0022189617157', 'lastCheck': '4.9', 'code': 0, 'valid': True}}</DIV>
<DIV>> Thread-5976::DEBUG::2013-03-05 14:52:31,556::task::1151::TaskManager.Task::(prepare) Task=`da545231-9538-412d-966e-a056ca7d074a`::finished: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129568576813', 'lastCheck': '4.7', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.0022189617157', 'lastCheck': '4.9', 'code': 0, 'valid': True}}</DIV>
<DIV>> Thread-5976::DEBUG::2013-03-05 14:52:31,556::task::568::TaskManager.Task::(_updateState) Task=`da545231-9538-412d-966e-a056ca7d074a`::moving from state preparing -> state finished</DIV>
<DIV>> Thread-5976::DEBUG::2013-03-05 14:52:31,556::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}</DIV>
<DIV>> Thread-5976::DEBUG::2013-03-05 14:52:31,556::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}</DIV>
<DIV>> Thread-5976::DEBUG::2013-03-05 14:52:31,557::task::957::TaskManager.Task::(_decref) Task=`da545231-9538-412d-966e-a056ca7d074a`::ref 0 aborting False</DIV>
<DIV>> Thread-5977::DEBUG::2013-03-05 14:52:31,565::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available</DIV>
<DIV>> Thread-21::DEBUG::2013-03-05 14:52:36,904::misc::83::Storage.Misc.excCmd::(<lambda>) '/bin/dd iflag=direct if=/dev/0e5827a5-6f3c-49be-be9b-0bfb65198644/metadata bs=4096 count=1' (cwd None)</DIV>
<DIV>> Thread-21::DEBUG::2013-03-05 14:52:36,917::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000400626 s, 10.2 MB/s\n'; <rc> = 0</DIV>
<DIV>> VM Channels Listener::DEBUG::2013-03-05 14:52:41,337::vmChannels::60::vds::(_handle_timeouts) Timeout on fileno 18.</DIV>
<DIV>> Thread-5982::DEBUG::2013-03-05 14:52:41,786::task::568::TaskManager.Task::(_updateState) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::moving from state init -> state preparing</DIV>
<DIV>> Thread-5982::INFO::2013-03-05 14:52:41,786::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)</DIV>
<DIV>> Thread-5982::INFO::2013-03-05 14:52:41,786::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0130889415741', 'lastCheck': '4.9', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00270700454712', 'lastCheck': '5.1', 'code': 0, 'valid': True}}</DIV>
<DIV>> Thread-5982::DEBUG::2013-03-05 14:52:41,787::task::1151::TaskManager.Task::(prepare) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::finished: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0130889415741', 'lastCheck': '4.9', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00270700454712', 'lastCheck': '5.1', 'code': 0, 'valid': True}}</DIV>
<DIV>> Thread-5982::DEBUG::2013-03-05 14:52:41,787::task::568::TaskManager.Task::(_updateState) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::moving from state preparing -> state finished</DIV>
<DIV>> Thread-5982::DEBUG::2013-03-05 14:52:41,787::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}</DIV>
<DIV>> Thread-5982::DEBUG::2013-03-05 14:52:41,787::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}</DIV>
<DIV>> Thread-5982::DEBUG::2013-03-05 14:52:41,787::task::957::TaskManager.Task::(_decref) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::ref 0 aborting False</DIV>
<DIV>> Thread-5983::DEBUG::2013-03-05 14:52:41,795::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available</DIV>
<DIV>> Thread-21::DEBUG::2013-03-05 14:52:46,921::misc::83::Storage.Misc.excCmd::(<lambda>) '/bin/dd iflag=direct if=/dev/0e5827a5-6f3c-49be-be9b-0bfb65198644/metadata bs=4096 count=1' (cwd None)</DIV>
<DIV>> Thread-21::DEBUG::2013-03-05 14:52:46,933::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000389226 s, 10.5 MB/s\n'; <rc> = 0</DIV>
<DIV>> Thread-5988::DEBUG::2013-03-05 14:52:52,015::task::568::TaskManager.Task::(_updateState) Task=`d8d9ccc1-f17b-4372-a27e-bbee4eda1736`::moving from state init -> state preparing</DIV>
<DIV>> Thread-5988::INFO::2013-03-05 14:52:52,016::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)</DIV>
<DIV>> Thread-5988::INFO::2013-03-05 14:52:52,016::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129730701447', 'lastCheck': '5.1', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00222110748291', 'lastCheck': '5.3', 'code': 0, 'valid': True}}</DIV>
<DIV>> Thread-5988::DEBUG::2013-03-05 14:52:52,016::task::1151::TaskManager.Task::(prepare) Task=`d8d9ccc1-f17b-4372-a27e-bbee4eda1736`::finished: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129730701447', 'lastCheck': '5.1', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00222110748291', 'lastCheck': '5.3', 'code': 0, 'valid': True}}</DIV>
<DIV>> Thread-5988::DEBUG::2013-03-05 14:52:52,016::task::568::TaskManager.Task::(_updateState) Task=`d8d9ccc1-f17b-4372-a27e-bbee4eda1736`::moving from state preparing -> state finished</DIV>
<DIV>></DIV>
<DIV>> ------------------------------------------------------------------------</DIV>
<DIV>> xianghuadu</DIV>
<DIV>></DIV>
<DIV>></DIV>
<DIV>> _______________________________________________</DIV>
<DIV>> Users mailing list</DIV>
<DIV>> Users@ovirt.org</DIV>
<DIV>> http://lists.ovirt.org/mailman/listinfo/users</DIV>
<DIV> </DIV>
<DIV>The vdsm log you pasted was produced by the recovery code that runs after </DIV>
<DIV>a migration fails, not by the migration failure itself. Could you paste </DIV>
<DIV>the log messages that appear just above these?</DIV>
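Since the traceback dies in an SSL read while contacting the destination, it is worth first confirming basic TCP reachability between the two hosts on vdsm's default listening port, 54321. A minimal sketch (the host name is a placeholder for your actual destination host):

```python
import socket

# Quick reachability probe for the peer host's vdsm port.
# 54321 is vdsm's default listening port; "dest-host" below is a
# placeholder for the actual migration destination.
def can_reach(host, port=54321, timeout=5):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

# Example (hypothetical host name):
# can_reach("dest-host")
```

If this succeeds but the migration still times out, the problem is more likely in the SSL handshake layer than in the network itself.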
<DIV>From the traceback in _recover, I suspect the problem is SSL-related. Does </DIV>
<DIV>it work if you disable SSL? For how to disable SSL, please see </DIV>
<DIV>http://www.ovirt.org/OVirt_-_disable_SSL_in_VDSM</DIV>
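The linked page has the authoritative steps; roughly (from memory, so treat this as a sketch and defer to the wiki), the host-side change looks like:

```shell
# Sketch of the host-side change (the wiki page above is authoritative):
# in /etc/vdsm/vdsm.conf, under the [vars] section, set:
#
#   ssl = false
#
# then restart the vdsm daemon, e.g.:
#
#   service vdsmd restart
#
# The engine must also be switched to non-SSL communication
# (its SSLEnabled option), or it will fail to talk to the hosts.
```

This has to be applied on every host involved in the migration, not just one side.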
<DIV> </DIV>
<DIV> </DIV></DIV></BODY></HTML>