
Hi. There are 2 nodes: 10.2.20.8 and 10.1.20.7. Migrating virtual machines from 10.2.20.8 to 10.1.20.7 succeeds, but migrating back, from 10.1.20.7 to 10.2.20.8, fails with the error: Migration failed due to Error: Migration destination has an invalid hostname (VM: 1, Source Host: 10.1.20.7).

In vdsm.log on 10.1.20.7:

Thread-5217::DEBUG::2012-02-17 11:07:08,435::clientIF::54::vds::(wrapper) [10.1.20.2]::call migrate with ({'src': '10.1.20.7', 'dst': '10.2.20.8:54321', 'vmId': '616938ca-d34f-437d-9e10-760d55eeadf6', 'method': 'online'},) {}
Thread-5217::DEBUG::2012-02-17 11:07:08,436::clientIF::357::vds::(migrate) {'src': '10.1.20.7', 'dst': '10.2.20.8:54321', 'vmId': '616938ca-d34f-437d-9e10-760d55eeadf6', 'method': 'online'}
Thread-5218::DEBUG::2012-02-17 11:07:08,437::vm::122::vm.Vm::(_setupVdsConnection) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::Destination server is: https://10.2.20.8:54321
Thread-5217::DEBUG::2012-02-17 11:07:08,438::clientIF::59::vds::(wrapper) return migrate with {'status': {'message': 'Migration process starting', 'code': 0}}
Thread-5218::DEBUG::2012-02-17 11:07:08,438::vm::124::vm.Vm::(_setupVdsConnection) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::Initiating connection with destination
Thread-5218::DEBUG::2012-02-17 11:07:08,490::vm::170::vm.Vm::(_prepareGuest) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::migration Process begins
Thread-5218::DEBUG::2012-02-17 11:07:08,493::vm::217::vm.Vm::(run) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::migration semaphore acquired
Thread-5218::ERROR::2012-02-17 11:07:08,517::vm::176::vm.Vm::(_recover) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::migration destination error: Migration destination has an invalid hostname
Thread-5218::ERROR::2012-02-17 11:07:08,570::vm::231::vm.Vm::(run) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::Traceback
(most recent call last):
  File "/usr/share/vdsm/vm.py", line 223, in run
    self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 400, in _startUnderlyingMigration
    raise RuntimeError('migration destination error: ' + response['status']['message'])
RuntimeError: migration destination error: Migration destination has an invalid hostname
Thread-5220::DEBUG::2012-02-17 11:07:09,299::clientIF::54::vds::(wrapper) [10.1.20.2]::call getVmStats with ('616938ca-d34f-437d-9e10-760d55eeadf6',) {}
Thread-5220::DEBUG::2012-02-17 11:07:09,300::clientIF::59::vds::(wrapper) return getVmStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '8620', 'displayIp': '0', 'displayPort': u'5903', 'session': 'Unknown', 'displaySecurePort': u'5904', 'timeOffset': '0', 'clientIp': '', 'kvmEnable': 'true', 'network': {u'vnet3': {'macAddr': u'00:1a:4a:a8:7a:06', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet3'}}, 'vmId': '616938ca-d34f-437d-9e10-760d55eeadf6', 'monitorResponse': '0', 'cpuUser': '0.00', 'disks': {u'vda': {'readLatency': '0', 'apparentsize': '1073741824', 'writeLatency': '0', 'imageID': '4b62aa22-c3e8-423e-b547-b4bc21c24ef7', 'flushLatency': '0', 'readRate': '0.00', 'truesize': '1073745920', 'writeRate': '0.00'}}, 'boot': 'c', 'statsAge': '0.05', 'cpuIdle': '100.00', 'elapsedTime': '833', 'vmType': 'kvm', 'cpuSys': '0.00', 'appsList': [], 'guestIPs': '', 'displayType': 'qxl', 'nice': ''}]}
Thread-5221::DEBUG::2012-02-17 11:07:09,317::clientIF::54::vds::(wrapper) [10.1.20.2]::call migrateStatus with ('616938ca-d34f-437d-9e10-760d55eeadf6',) {}
Thread-5221::DEBUG::2012-02-17 11:07:09,317::clientIF::59::vds::(wrapper)
return migrateStatus with {'status': {'message': 'Migration destination has an invalid hostname', 'code': 39}}

In vdsm.log on 10.2.20.8:

Thread-1825::DEBUG::2012-02-17 11:09:06,431::clientIF::54::vds::(wrapper) [10.1.20.7]::call getVmStats with ('616938ca-d34f-437d-9e10-760d55eeadf6',) {}
Thread-1825::DEBUG::2012-02-17 11:09:06,432::clientIF::59::vds::(wrapper) return getVmStats with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}
Thread-1826::DEBUG::2012-02-17 11:09:06,454::clientIF::54::vds::(wrapper) [10.1.20.7]::call migrationCreate with ({'bridge': 'ovirtmgmt', 'acpiEnable': 'true', 'emulatedMachine': 'pc', 'afterMigrationStatus': 'Up', 'spiceSecureChannels': 'smain,sinputs', 'vmId': '616938ca-d34f-437d-9e10-760d55eeadf6', 'transparentHugePages': 'true', 'displaySecurePort': '5904', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'Conroe', 'custom': {}, 'migrationDest': 'libvirt', 'macAddr': '00:1a:4a:a8:7a:06', 'boot': 'c', 'smp': '1', 'vmType': 'kvm', '_srcDomXML': "<domain type='kvm' id='14'>\n <name>1</name>\n <uuid>616938ca-d34f-437d-9e10-760d55eeadf6</uuid>\n <memory>524288</memory>\n <currentMemory>524288</currentMemory>\n <vcpu>1</vcpu>\n <cputune>\n <shares>1020</shares>\n <period>100000</period>\n <quota>-1</quota>\n </cputune>\n <sysinfo type='smbios'>\n <system>\n <entry name='manufacturer'>Red Hat</entry>\n <entry name='product'>RHEV Hypervisor</entry>\n <entry name='version'>6.2-1.1</entry>\n <entry name='serial'>54748E0A-54FC-6615-54FD-661559792E0B_00:1C:C4:74:B0:96</entry>\n <entry name='uuid'>616938ca-d34f-437d-9e10-760d55eeadf6</entry>\n </system>\n </sysinfo>\n <os>\n <type arch='x86_64' machine='rhel6.2.0'>hvm</type>\n <boot dev='hd'/>\n <smbios mode='sysinfo'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu match='exact'>\n <model>Conroe</model>\n <topology sockets='1' cores='1' threads='1'/>\n </cpu>\n <clock offset='variable' adjustment='0'>\n <timer name='rtc' tickpolicy='catchup'/>\n </clock>\n 
<on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type='file' device='disk'>\n <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>\n <source file='/rhev/data-center/6e541d98-5630-11e1-b4e4-001cc43ceea0/a409075b-cd33-4c20-b743-67901c7b3c02/images/4b62aa22-c3e8-423e-b547-b4bc21c24ef7/936f1fe4-1533-42fb-9f8d-2d712f6498e1'/>\n <target dev='vda' bus='virtio'/>\n <serial>3e-b547-b4bc21c24ef7</serial>\n <alias name='virtio-disk0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>\n </disk>\n <disk type='file' device='cdrom'>\n <driver name='qemu' type='raw'/>\n <target dev='hdc' bus='ide'/>\n <readonly/>\n <alias name='ide0-1-0'/>\n <address type='drive' controller='0' bus='1' unit='0'/>\n </disk>\n <controller type='virtio-serial' index='0' ports='16'>\n <alias name='virtio-serial0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>\n </controller>\n <controller type='ide' index='0'>\n <alias name='ide0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>\n </controller>\n <interface type='bridge'>\n <mac address='00:1a:4a:a8:7a:06'/>\n <source bridge='ovirtmgmt'/>\n <target dev='vnet3'/>\n <model type='virtio'/>\n <alias name='net0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>\n </interface>\n <channel type='unix'>\n <source mode='bind' path='/var/lib/libvirt/qemu/channels/1.com.redhat.rhevm.vdsm'/>\n <target type='virtio' name='com.redhat.rhevm.vdsm'/>\n <alias name='channel0'/>\n <address type='virtio-serial' controller='0' bus='0' port='1'/>\n </channel>\n <channel type='spicevmc'>\n <target type='virtio' name='com.redhat.spice.0'/>\n <alias name='channel1'/>\n <address type='virtio-serial' controller='0' bus='0' port='2'/>\n </channel>\n <input type='mouse' bus='ps2'/>\n <graphics type='spice' port='5903' 
tlsPort='5904' autoport='yes' listen='0' keymap='en-us' passwdValidTo='2012-02-17T15:55:50' connected='disconnect'>\n <listen type='address' address='0'/>\n <channel name='main' mode='secure'/>\n <channel name='inputs' mode='secure'/>\n </graphics>\n <video>\n <model type='qxl' vram='65536' heads='1'/>\n <alias name='video0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>\n </video>\n <memballoon model='none'>\n <alias name='balloon0'/>\n </memballoon>\n </devices>\n <seclabel type='dynamic' model='selinux' relabel='yes'>\n <label>system_u:system_r:svirt_t:s0:c655,c750</label>\n <imagelabel>system_u:object_r:svirt_image_t:s0:c655,c750</imagelabel>\n </seclabel>\n</domain>\n", 'memSize': 512, 'elapsedTimeOffset': 979.12826704978943, 'vmName': '1', 'spiceMonitors': '1', 'nice': '0', 'status': 'Up', 'displayIp': '0', 'drives': [{'index': '0', 'domainID': 'a409075b-cd33-4c20-b743-67901c7b3c02', 'apparentsize': '1073741824', 'format': 'raw', 'boot': 'true', 'volumeID': '936f1fe4-1533-42fb-9f8d-2d712f6498e1', 'imageID': '4b62aa22-c3e8-423e-b547-b4bc21c24ef7', 'blockDev': False, 'truesize': '1073745920', 'poolID': '6e541d98-5630-11e1-b4e4-001cc43ceea0', 'path': '/rhev/data-center/6e541d98-5630-11e1-b4e4-001cc43ceea0/a409075b-cd33-4c20-b743-67901c7b3c02/images/4b62aa22-c3e8-423e-b547-b4bc21c24ef7/936f1fe4-1533-42fb-9f8d-2d712f6498e1', 'serial': '3e-b547-b4bc21c24ef7', 'propagateErrors': 'off', 'if': 'virtio'}], 'displayPort': '5903', 'smpCoresPerSocket': '1', 'clientIp': '', 'nicModel': 'pv', 'keyboardLayout': 'en-us', 'kvmEnable': 'true', 'username': 'Unknown', 'timeOffset': '0', 'guestIPs': '', 'display': 'qxl'},) {}
Thread-1826::DEBUG::2012-02-17 11:09:06,455::clientIF::770::vds::(migrationCreate) Migration create
Thread-1826::ERROR::2012-02-17 11:09:06,459::clientIF::773::vds::(migrationCreate) Migration failed: local hostname is not correct
Thread-1826::DEBUG::2012-02-17 11:09:06,459::clientIF::59::vds::(wrapper) return
migrationCreate with {'status': {'message': 'Migration destination has an invalid hostname', 'code': 39}}
Thread-1827::DEBUG::2012-02-17 11:09:06,508::clientIF::54::vds::(wrapper) [10.1.20.7]::call destroy with ('616938ca-d34f-437d-9e10-760d55eeadf6',) {}
Thread-1827::INFO::2012-02-17 11:09:06,509::clientIF::450::vds::(destroy) vmContainerLock acquired by vm 616938ca-d34f-437d-9e10-760d55eeadf6
Thread-1827::DEBUG::2012-02-17 11:09:06,509::clientIF::59::vds::(wrapper) return destroy with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}
Thread-1828::INFO::2012-02-17 11:09:07,859::dispatcher::94::Storage.Dispatcher.Protect::(run) Run and protect: repoStats, args: ()
Thread-1828::DEBUG::2012-02-17 11:09:07,859::task::495::TaskManager.Task::(_debug) Task b7166e0f-1483-4635-a4e5-330b878f0218: moving from state init -> state preparing
Thread-1828::DEBUG::2012-02-17 11:09:07,859::task::495::TaskManager.Task::(_debug) Task b7166e0f-1483-4635-a4e5-330b878f0218: finished: {'cc23ab64-d477-4b76-851c-8862a59a6e06': {'delay': '0.0019850730896', 'lastCheck': 1329494942.650471, 'valid': True, 'code': 0}, 'a409075b-cd33-4c20-b743-67901c7b3c02': {'delay': '0.00191402435303', 'lastCheck': 1329494942.7406051, 'valid': True, 'code': 0}}
Thread-1828::DEBUG::2012-02-17 11:09:07,860::task::495::TaskManager.Task::(_debug) Task b7166e0f-1483-4635-a4e5-330b878f0218: moving from state preparing -> state finished
Thread-1828::DEBUG::2012-02-17 11:09:07,860::resourceManager::786::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1828::DEBUG::2012-02-17 11:09:07,860::resourceManager::821::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1828::DEBUG::2012-02-17 11:09:07,861::task::495::TaskManager.Task::(_debug) Task b7166e0f-1483-4635-a4e5-330b878f0218: ref 0 aborting False
Thread-1828::INFO::2012-02-17 11:09:07,861::dispatcher::100::Storage.Dispatcher.Protect::(run) Run and protect: repoStats, Return response: {'status':
{'message': 'OK', 'code': 0}, 'cc23ab64-d477-4b76-851c-8862a59a6e06': {'delay': '0.0019850730896', 'lastCheck': 1329494942.650471, 'valid': True, 'code': 0}, 'a409075b-cd33-4c20-b743-67901c7b3c02': {'delay': '0.00191402435303', 'lastCheck': 1329494942.7406051, 'valid': True, 'code': 0}}

How do I fix this? I checked the hostname on both nodes and found that it resolves correctly (there is an entry in /etc/hostname on each node). The hostnames are not registered in DNS (!)
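For reference, the destination-side failure ("local hostname is not correct", returned to the source as code 39) amounts to the host deciding that its own hostname does not resolve to a usable address. A minimal sketch of that kind of check follows; this is an approximation for testing resolution by hand, not vdsm's actual implementation:

```python
# Approximate the hostname sanity check a migration destination performs
# (a sketch, NOT vdsm's actual code): the local hostname must resolve,
# and must not resolve to a loopback address.
import socket

def hostname_is_usable(name):
    """Return True if `name` resolves to a non-loopback IPv4 address."""
    try:
        addr = socket.gethostbyname(name)
    except socket.error:
        return False  # the name does not resolve at all
    return not addr.startswith("127.")

if __name__ == "__main__":
    name = socket.gethostname()
    print(name, hostname_is_usable(name))
```

Running this on each node shows whether its hostname resolves to a real address; a hostname that only maps to 127.0.0.1 (e.g. via a loopback line in /etc/hosts) would fail a check like this even though it "resolves".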