[Users] Help: Fatal error during migration

T-Sinjon tscbj1989 at gmail.com
Tue May 8 22:16:45 EDT 2012


After looking into the log, it says:

virNetSocketNewConnectTCP:432 : unable to connect to server at '172.30.0.231:16514': Connection refused
2012-04-26 10:14:23.996+0000: 16970: debug : do_open:1078 : driver 8 remote returned ERROR
2012-04-26 10:14:23.996+0000: 16970: error : doPeer2PeerMigrate:2129 : operation failed: Failed to connect to remote libvirt URI qemu+tls://172.30.0.231/system

TLS was not working.
I resolved this by setting "listen_tls = 1" in the libvirt configuration on node 1 and node 2, after which the migration went through.
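For anyone hitting the same "Connection refused ... qemu+tls://" error, the change amounts to enabling the TLS listener in /etc/libvirt/libvirtd.conf on both hosts and restarting libvirtd. A minimal sketch of the edit (operating on a scratch copy with illustrative contents, not the live file from the nodes):

```shell
# Illustrative only: work on a scratch copy instead of the real
# /etc/libvirt/libvirtd.conf
conf=/tmp/libvirtd.conf.sample
cat > "$conf" <<'EOF'
# Listen for secure TLS connections on the public TCP/IP port.
#listen_tls = 0
listen_tcp = 0
EOF

# Uncomment the setting (if commented) and force it to 1
sed -i 's/^#\{0,1\}listen_tls *=.*/listen_tls = 1/' "$conf"

# Confirm the result
grep '^listen_tls' "$conf"

# On the real hosts you would then restart the daemon, e.g.:
# service libvirtd restart
```

Note that on many distributions libvirtd only honours listen_tls/listen_tcp when started with the --listen flag (e.g. via LIBVIRTD_ARGS in /etc/sysconfig/libvirtd), so check that as well if the port still refuses connections.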

Thanks all the same.


On 26 Apr, 2012, at 6:28 PM, T-Sinjon wrote:

> thanks Heim
> 
> node1 is the target node and node2 is the source node. There are too many logs, so I have put the relevant logs in the files below:
> <node1-vdsm-log.txt><node2-vdsm-log.txt><node2-libvirtd-log.txt>
> 
> 
> On 26 Apr, 2012, at 3:07 PM, Itamar Heim wrote:
> 
>> On 04/26/2012 06:12 AM, Gmail wrote:
>>> Hello, dears:
>>> 
>>> I have an ovirt-engine with two nodes:
>>> 
>>> 172.30.0.229	ovirt-engine.local
>>> 172.30.0.231	ovirt-node-1.local
>>> 172.30.0.232	ovirt-node-2.local
>>> 
>>> When I use live migration to migrate a VM from node2 to node1, it fails with error code 12 and the error message 'Fatal error during migration'.
>>> I don't know why; could anybody help me? Thanks.
>>> 
>>> Detailed log below:
>>> 
>>> tail -f /var/log/ovirt-engine/engine.log
>>> 2012-04-25 22:59:52,744 INFO  [org.ovirt.engine.core.bll.VdsSelector] (http--0.0.0.0-8080-16) Checking for a specific VDS only - id:ae567034-5d8e-11e1-bdc9-a7168ad4d39f, name:ovirt-node-1.local, host_name(ip):172.30.0.231
>>> 2012-04-25 22:59:52,745 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (pool-5-thread-46) Running command: MigrateVmToServerCommand internal: false. Entities affected :  ID: f73f17a8-a418-4318-af0e-2cea18ab597a Type: VM
>>> 2012-04-25 22:59:52,748 INFO  [org.ovirt.engine.core.bll.VdsSelector] (pool-5-thread-46) Checking for a specific VDS only - id:ae567034-5d8e-11e1-bdc9-a7168ad4d39f, name:ovirt-node-1.local, host_name(ip):172.30.0.231
>>> 2012-04-25 22:59:52,754 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-5-thread-46) START, MigrateVDSCommand(vdsId = bcee74d4-5d8e-11e1-a03e-8bc4768fc531, vmId=f73f17a8-a418-4318-af0e-2cea18ab597a, srcHost=172.30.0.232, dstVdsId=ae567034-5d8e-11e1-bdc9-a7168ad4d39f, dstHost=172.30.0.231:54321, migrationMethod=ONLINE), log id: 444b3006
>>> 2012-04-25 22:59:52,757 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-5-thread-46) VdsBroker::migrate::Entered (vm_guid=f73f17a8-a418-4318-af0e-2cea18ab597a, srcHost=172.30.0.232, dstHost=172.30.0.231:54321,  method=online
>>> 2012-04-25 22:59:52,757 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-5-thread-46) START, MigrateBrokerVDSCommand(vdsId = bcee74d4-5d8e-11e1-a03e-8bc4768fc531, vmId=f73f17a8-a418-4318-af0e-2cea18ab597a, srcHost=172.30.0.232, dstVdsId=ae567034-5d8e-11e1-bdc9-a7168ad4d39f, dstHost=172.30.0.231:54321, migrationMethod=ONLINE), log id: 2a26d38e
>>> 2012-04-25 22:59:52,778 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (pool-5-thread-46) FINISH, MigrateBrokerVDSCommand, log id: 2a26d38e
>>> 2012-04-25 22:59:52,817 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-5-thread-46) FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 444b3006
>>> 2012-04-25 22:59:53,281 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-40) Rerun vm f73f17a8-a418-4318-af0e-2cea18ab597a. Called from vds ovirt-node-2.local
>>> 2012-04-25 22:59:53,286 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-5-thread-46) START, UpdateVdsDynamicDataVDSCommand(vdsId = ae567034-5d8e-11e1-bdc9-a7168ad4d39f, vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic at 34c39cde), log id: 4c2ed94e
>>> 2012-04-25 22:59:53,299 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-5-thread-46) FINISH, UpdateVdsDynamicDataVDSCommand, log id: 4c2ed94e
>>> 2012-04-25 22:59:53,304 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-5-thread-46) START, MigrateStatusVDSCommand(vdsId = bcee74d4-5d8e-11e1-a03e-8bc4768fc531, vmId=f73f17a8-a418-4318-af0e-2cea18ab597a), log id: 69d2528
>>> 2012-04-25 22:59:53,320 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-5-thread-46) Failed in MigrateStatusVDS method
>>> 2012-04-25 22:59:53,320 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-5-thread-46) Error code migrateErr and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration
>>> 2012-04-25 22:59:53,321 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-5-thread-46) Command org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value
>>> Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
>>> mStatus                       Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
>>> mCode                         12
>>> mMessage                      Fatal error during migration
>>> 
>>> 
>>> 2012-04-25 22:59:53,321 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-5-thread-46) Vds: ovirt-node-2.local
>>> 2012-04-25 22:59:53,321 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-5-thread-46) Command MigrateStatusVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration
>>> 2012-04-25 22:59:53,321 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-5-thread-46) FINISH, MigrateStatusVDSCommand, log id: 69d2528
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>> 
>> vdsm logs (and maybe libvirt as well) from source and target nodes
> 



