yep, sure enough:
[root@d0lppn032 ~]# cat /var/log/libvirtd.log|grep CA
2013-01-09 22:45:30.310+0000: 4413: error : virNetTLSContextCheckCertFile:92 : Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory
[root@d0lppn032 ~]# locate cacert.pem
/etc/pki/vdsm/certs/cacert.pem
I just created symbolic links on the hosts, pointing the expected CA cert
and client cert/key paths at the vdsm certs/key, and everything was fine
after that.
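
For anyone hitting the same thing, the workaround looks roughly like this (a sketch only; the /etc/pki/vdsm file names are from a stock vdsm install and the /etc/pki/libvirt client-cert paths are libvirt's defaults, so verify both on your hosts before linking):

  mkdir -p /etc/pki/CA /etc/pki/libvirt/private
  # CA cert where libvirtd expects it (the exact path from the error above)
  ln -s /etc/pki/vdsm/certs/cacert.pem /etc/pki/CA/cacert.pem
  # client cert/key used for outgoing qemu+tls connections (paths assumed)
  ln -s /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/libvirt/clientcert.pem
  ln -s /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/libvirt/private/clientkey.pem

Alternatively, the daemon side can be pointed at the vdsm files directly in /etc/libvirt/libvirtd.conf (untested sketch; ca_file/cert_file/key_file are the stock option names, and they cover the daemon's server certs only, which is why the client-side symlinks above are still the simpler fix):

  ca_file = "/etc/pki/vdsm/certs/cacert.pem"
  cert_file = "/etc/pki/vdsm/certs/vdsmcert.pem"
  key_file = "/etc/pki/vdsm/keys/vdsmkey.pem"

After either change, restart libvirtd and sanity-check TLS from the source host with something like "virsh -c qemu+tls://d0lppn031.skopos.me/system list" before retrying the migration.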
---
Jeff
Is there a config file somewhere I can edit to hopefully get this working?
On 1/9/13 4:39 PM, "Jeff Bailey" <bailey(a)cs.kent.edu> wrote:
> I don't know any details about the dreyou version you're running, but
> look in the libvirtd log on the source host and see if there is a
> complaint about a missing CA cert. That problem existed in the 3.2
> alpha but should have been fixed in the nightlies. I don't know whether
> it made it to the version you're running. It could also be something
> totally different :)
>
> ---
> Jeff
>
> On 1/9/2013 8:54 AM, Jonathan Horne wrote:
>> from source:
>>
>> [root@d0lppn032 vdsm]# egrep 'Thread-30209|Thread-30210'
>> /var/log/vdsm/vdsm.log
>> Thread-30209::DEBUG::2013-01-09 07:49:08,543::BindingXMLRPC::908::vds::(wrapper) client [10.32.0.21]::call vmMigrate with ({'src': 'd0lppn032.skopos.me', 'dst': 'd0lppn031.skopos.me:54321', 'vmId': 'e00adf83-78d2-4f65-a259-0e01680f57fd', 'method': 'online'},) {} flowID [343d5f4c]
>> Thread-30209::DEBUG::2013-01-09 07:49:08,543::API::435::vds::(migrate) {'src': 'd0lppn032.skopos.me', 'dst': 'd0lppn031.skopos.me:54321', 'vmId': 'e00adf83-78d2-4f65-a259-0e01680f57fd', 'method': 'online'}
>> Thread-30210::DEBUG::2013-01-09 07:49:08,544::vm::139::vm.Vm::(_setupVdsConnection) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Destination server is: d0lppn031.skopos.me:54321
>> Thread-30210::DEBUG::2013-01-09 07:49:08,545::vm::141::vm.Vm::(_setupVdsConnection) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Initiating connection with destination
>> Thread-30209::DEBUG::2013-01-09 07:49:08,545::BindingXMLRPC::915::vds::(wrapper) return vmMigrate with {'status': {'message': 'Migration process starting', 'code': 0}}
>> Thread-30210::DEBUG::2013-01-09 07:49:08,614::libvirtvm::298::vm.Vm::(_getDiskLatency) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Disk vda latency not available
>> Thread-30210::DEBUG::2013-01-09 07:49:08,659::vm::190::vm.Vm::(_prepareGuest) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::migration Process begins
>> Thread-30210::DEBUG::2013-01-09 07:49:08,751::vm::261::vm.Vm::(run) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::migration semaphore acquired
>> Thread-30210::DEBUG::2013-01-09 07:49:09,168::libvirtvm::469::vm.Vm::(_startUnderlyingMigration) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::starting migration to qemu+tls://d0lppn031.skopos.me/system
>> Thread-30210::DEBUG::2013-01-09 07:49:09,230::libvirtvm::370::vm.Vm::(cancel) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::canceling migration downtime thread
>> Thread-30210::DEBUG::2013-01-09 07:49:09,231::libvirtvm::429::vm.Vm::(stop) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::stopping migration monitor thread
>> Thread-30210::ERROR::2013-01-09 07:49:09,232::vm::196::vm.Vm::(_recover) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::operation failed: Failed to connect to remote libvirt URI qemu+tls://d0lppn031.skopos.me/system
>> Thread-30210::ERROR::2013-01-09 07:49:09,374::vm::284::vm.Vm::(run) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Failed to migrate
>>
>>
>>
>> and from destination:
>>
>> [root@d0lppn031 vdsm]# egrep 'Thread-40128|Thread-40129'
>> /var/log/vdsm/vdsm.log
>> Thread-40128::DEBUG::2013-01-09 07:49:08,943::BindingXMLRPC::908::vds::(wrapper) client [10.32.0.32]::call vmMigrationCreate with ({'username': 'root', 'acpiEnable': 'true', 'emulatedMachine': 'pc', 'afterMigrationStatus': 'Up', 'vmId': 'e00adf83-78d2-4f65-a259-0e01680f57fd', 'transparentHugePages': 'true', 'displaySecurePort': '5901', 'timeOffset': 1, 'cpuType': 'SandyBridge', 'custom': {}, 'migrationDest': 'libvirt', 'smp': '4', 'vmType': 'kvm', 'spiceSslCipherSuite': 'DEFAULT', '_srcDomXML': "<domain type='kvm' id='2'>\n <name>d0lpvf051</name>\n <uuid>e00adf83-78d2-4f65-a259-0e01680f57fd</uuid>\n <memory unit='KiB'>4194304</memory>\n <currentMemory unit='KiB'>4194304</currentMemory>\n <vcpu placement='static'>4</vcpu>\n <cputune>\n <shares>1020</shares>\n </cputune>\n <sysinfo type='smbios'>\n <system>\n <entry name='manufacturer'>oVirt</entry>\n <entry name='product'>oVirt Node</entry>\n <entry name='version'>6-3.el6.centos.9</entry>\n <entry name='serial'>4C4C4544-004D-4E10-8036-B3C04F575731</entry>\n <entry name='uuid'>e00adf83-78d2-4f65-a259-0e01680f57fd</entry>\n </system>\n </sysinfo>\n <os>\n <type arch='x86_64' machine='rhel6.3.0'>hvm</type>\n <boot dev='hd'/>\n <smbios mode='sysinfo'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode='custom' match='exact'>\n <model fallback='allow'>SandyBridge</model>\n <topology sockets='4' cores='1' threads='1'/>\n </cpu>\n <clock offset='variable' adjustment='1'>\n <timer name='rtc' tickpolicy='catchup'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type='file' device='cdrom'>\n <driver name='qemu' type='raw'/>\n <source startupPolicy='optional'/>\n <target dev='hdc' bus='ide'/>\n <readonly/>\n <serial></serial>\n <alias name='ide0-1-0'/>\n <address type='drive' controller='0' bus='1' target='0' unit='0'/>\n </disk>\n <disk type='block' device='disk' snapshot='no'>\n <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/>\n <source dev='/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881'/>\n <target dev='vda' bus='virtio'/>\n <serial>0a9e06f9-2e85-4410-930d-7e381128c212</serial>\n <alias name='virtio-disk0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>\n </disk>\n <controller type='usb' index='0'>\n <alias name='usb0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>\n </controller>\n <controller type='ide' index='0'>\n <alias name='ide0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>\n </controller>\n <controller type='virtio-serial' index='0'>\n <alias name='virtio-serial0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>\n </controller>\n <interface type='bridge'>\n <mac address='00:1a:4a:20:00:9e'/>\n <source bridge='ovirtmgmt'/>\n <target dev='vnet0'/>\n <model type='virtio'/>\n <filterref filter='vdsm-no-mac-spoofing'/>\n <alias name='net0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>\n </interface>\n <interface type='bridge'>\n <mac address='00:1a:4a:20:00:9f'/>\n <source bridge='ovirtmgmt'/>\n <target dev='vnet1'/>\n <model type='virtio'/>\n <filterref filter='vdsm-no-mac-spoofing'/>\n <alias name='net1'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>\n </interface>\n <console type='pty' tty='/dev/pts/1'>\n <source path='/dev/pts/1'/>\n <target type='virtio' port='0'/>\n <alias name='console0'/>\n </console>\n <channel type='unix'>\n <source mode='bind' path='/var/lib/libvirt/qemu/channels/d0lpvf051.com.redhat.rhevm.vdsm'/>\n <target type='virtio' name='com.redhat.rhevm.vdsm'/>\n <alias name='channel0'/>\n <address type='virtio-serial' controller='0' bus='0' port='1'/>\n </channel>\n <channel type='unix'>\n <source mode='bind' path='/var/lib/libvirt/qemu/channels/d0lpvf051.org.qemu.guest_agent.0'/>\n <target type='virtio' name='org.qemu.guest_agent.0'/>\n <alias name='channel1'/>\n <address type='virtio-serial' controller='0' bus='0' port='2'/>\n </channel>\n <channel type='spicevmc'>\n <target type='virtio' name='com.redhat.spice.0'/>\n <alias name='channel2'/>\n <address type='virtio-serial' controller='0' bus='0' port='3'/>\n </channel>\n <input type='mouse' bus='ps2'/>\n <graphics type='spice' port='5900' tlsPort='5901' autoport='yes' listen='0' keymap='en-us' passwdValidTo='2013-01-09T00:22:14' connected='keep'>\n <listen type='address' address='0'/>\n <channel name='main' mode='secure'/>\n <channel name='display' mode='secure'/>\n <channel name='inputs' mode='secure'/>\n <channel name='cursor' mode='secure'/>\n <channel name='playback' mode='secure'/>\n <channel name='record' mode='secure'/>\n <channel name='smartcard' mode='secure'/>\n <channel name='usbredir' mode='secure'/>\n </graphics>\n <video>\n <model type='qxl' vram='65536' heads='1'/>\n <alias name='video0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>\n </video>\n <memballoon model='virtio'>\n <alias name='balloon0'/>\n <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>\n </memballoon>\n </devices>\n <seclabel type='none'/>\n</domain>\n", 'memSize': 4096, 'elapsedTimeOffset': 48875.878304004669, 'vmName': 'd0lpvf051', 'nice': '0', 'status': 'Up', 'clientIp': '', 'displayIp': '0', 'displayPort': '5900', 'smpCoresPerSocket': '1', 'smartcardEnable': 'false', 'guestIPs': '10.32.0.51', 'nicModel': 'rtl8139,pv', 'keyboardLayout': 'en-us', 'kvmEnable': 'true', 'pitReinjection': 'false', 'devices': [{'specParams': {'vram': '65536'}, 'alias': 'video0', 'deviceId': '99c46e79-710c-4f60-87eb-904fa496f0a7', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'qxl', 'type': 'video'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:20:00:9e', 'network': 'ovirtmgmt', 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': '054f3e29-d531-438b-8d1c-09cbed863bea', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:20:00:9f', 'network': 'ovirtmgmt', 'alias': 'net1', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': '48ff9261-fbd8-4f0f-a823-4c6451226b95', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet1'}, {'target': 4194304, 'specParams': {'model': 'virtio'}, 'alias': 'balloon0', 'deviceId': 'df32c652-f9c6-42db-ae88-579ee90aa714', 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'memballoon', 'type': 'balloon'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'shared': 'false', 'specParams': {'path': ''}, 'readonly': 'True', 'deviceId': '62d5042c-3b75-4cb3-b175-373f66c7356b', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'path': '', 'type': 'disk'}, {'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'index': 0, 'iface': 'virtio', 'apparentsize': '4294967296', 'alias': 'virtio-disk0', 'imageID': '0a9e06f9-2e85-4410-930d-7e381128c212', 'readonly': 'False', 'shared': 'false', 'truesize': '4294967296', 'type': 'disk', 'domainID': 'd668d949-42f5-49a3-b33f-0b174ac226c5', 'reqsize': '0', 'format': 'cow', 'deviceId': '0a9e06f9-2e85-4410-930d-7e381128c212', 'poolID': 'fdab6642-8928-43d5-8ca1-b68792d70729', 'device': 'disk', 'path': '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'volumeID': '93410a32-21d8-498c-b12a-efb43a2ef881', 'specParams': {}, 'volumeChain': [{'path': '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'domainID': 'd668d949-42f5-49a3-b33f-0b174ac226c5', 'volumeID': '93410a32-21d8-498c-b12a-efb43a2ef881', 'imageID': '0a9e06f9-2e85-4410-930d-7e381128c212'}]}, {'device': 'usb', 'alias': 'usb0', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide0', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}], 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'display': 'qxl'},) {}
>> Thread-40128::DEBUG::2013-01-09
>> 07:49:08,943::API::470::vds::(migrationCreate) Migration create
>> Thread-40128::INFO::2013-01-09
>> 07:49:08,944::API::607::vds::(_getNetworkIp) network None: using 0
>> Thread-40128::INFO::2013-01-09
>> 07:49:08,944::clientIF::334::vds::(createVm) vmContainerLock acquired by
>> vm e00adf83-78d2-4f65-a259-0e01680f57fd
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,952::vm::651::vm.Vm::(_startUnderlyingVm)
>> vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Start
>> Thread-40128::DEBUG::2013-01-09
>> 07:49:08,953::clientIF::348::vds::(createVm) Total desktops after
>> creation
>> of e00adf83-78d2-4f65-a259-0e01680f57fd is 1
>> Thread-40128::DEBUG::2013-01-09 07:49:08,954::libvirtvm::2921::vm.Vm::(waitForMigrationDestinationPrepare) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::migration destination: waiting for VM creation
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,954::vm::655::vm.Vm::(_startUnderlyingVm)
>> vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::_ongoingCreations acquired
>> Thread-40129::INFO::2013-01-09
>> 07:49:08,955::libvirtvm::1430::vm.Vm::(_run)
>> vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::VM wrapper has started
>> Thread-40128::DEBUG::2013-01-09 07:49:08,955::libvirtvm::2927::vm.Vm::(waitForMigrationDestinationPrepare) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::migration destination: waiting 36s for path preparation
>> Thread-40129::WARNING::2013-01-09 07:49:08,955::vm::455::vm.Vm::(getConfDevices) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
>> Thread-40129::WARNING::2013-01-09 07:49:08,956::vm::455::vm.Vm::(getConfDevices) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
>> Thread-40129::WARNING::2013-01-09 07:49:08,957::vm::455::vm.Vm::(getConfDevices) vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Unknown type found, device: '{'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}' found
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,958::task::568::TaskManager.Task::(_updateState)
>> Task=`fce2eb32-aa53-4742-962d-11a07cc9f06d`::moving from state init ->
>> state preparing
>> Thread-40129::INFO::2013-01-09
>> 07:49:08,958::logUtils::41::dispatcher::(wrapper) Run and protect:
>> getVolumeSize(sdUUID='d668d949-42f5-49a3-b33f-0b174ac226c5',
>> spUUID='fdab6642-8928-43d5-8ca1-b68792d70729',
>> imgUUID='0a9e06f9-2e85-4410-930d-7e381128c212',
>> volUUID='93410a32-21d8-498c-b12a-efb43a2ef881', options=None)
>> Thread-40129::INFO::2013-01-09
>> 07:49:08,959::logUtils::44::dispatcher::(wrapper) Run and protect:
>> getVolumeSize, Return response: {'truesize': '5368709120',
>> 'apparentsize':
>> '5368709120'}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,959::task::1151::TaskManager.Task::(prepare)
>> Task=`fce2eb32-aa53-4742-962d-11a07cc9f06d`::finished: {'truesize':
>> '5368709120', 'apparentsize': '5368709120'}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,959::task::568::TaskManager.Task::(_updateState)
>> Task=`fce2eb32-aa53-4742-962d-11a07cc9f06d`::moving from state preparing
>> -> state finished
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,960::resourceManager::830::ResourceManager.Owner::(releaseAll)
>> Owner.releaseAll requests {} resources {}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,960::resourceManager::864::ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,960::task::957::TaskManager.Task::(_decref)
>> Task=`fce2eb32-aa53-4742-962d-11a07cc9f06d`::ref 0 aborting False
>> Thread-40129::INFO::2013-01-09
>> 07:49:08,961::clientIF::316::vds::(prepareVolumePath) prepared volume
>> path:
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,961::task::568::TaskManager.Task::(_updateState)
>> Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::moving from state init ->
>> state preparing
>> Thread-40129::INFO::2013-01-09
>> 07:49:08,961::logUtils::41::dispatcher::(wrapper) Run and protect:
>> prepareImage(sdUUID='d668d949-42f5-49a3-b33f-0b174ac226c5',
>> spUUID='fdab6642-8928-43d5-8ca1-b68792d70729',
>> imgUUID='0a9e06f9-2e85-4410-930d-7e381128c212',
>> volUUID='93410a32-21d8-498c-b12a-efb43a2ef881')
>> Thread-40129::DEBUG::2013-01-09 07:49:08,962::resourceManager::190::ResourceManager.Request::(__init__) ResName=`Storage.d668d949-42f5-49a3-b33f-0b174ac226c5`ReqID=`4261e498-e720-42fd-8e48-ab390bd84a29`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at '__init__'
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,962::resourceManager::504::ResourceManager::(registerResource)
>> Trying to register resource
>> 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5'
>> for lock type 'shared'
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,963::resourceManager::547::ResourceManager::(registerResource)
>> Resource 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5' is free. Now
>> locking as 'shared' (1 active user)
>> Thread-40129::DEBUG::2013-01-09 07:49:08,963::resourceManager::227::ResourceManager.Request::(grant) ResName=`Storage.d668d949-42f5-49a3-b33f-0b174ac226c5`ReqID=`4261e498-e720-42fd-8e48-ab390bd84a29`::Granted request
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,964::task::794::TaskManager.Task::(resourceAcquired)
>> Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::_resourcesAcquired:
>> Storage.d668d949-42f5-49a3-b33f-0b174ac226c5 (shared)
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,964::task::957::TaskManager.Task::(_decref)
>> Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::ref 1 aborting False
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:08,965::lvm::409::OperationMutex::(_reloadlvs) Operation 'lvm
>> reload
>> operation' got the operation mutex
>> Thread-40129::DEBUG::2013-01-09 07:49:08,965::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%36090a028108c8e5dca0e950a0000e01a%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags d668d949-42f5-49a3-b33f-0b174ac226c5' (cwd None)
>> Thread-40129::DEBUG::2013-01-09 07:49:09,081::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,094::lvm::442::OperationMutex::(_reloadlvs) Operation 'lvm
>> reload
>> operation' released the operation mutex
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,106::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
>> iflag=direct skip=4 bs=512
>> if=/dev/d668d949-42f5-49a3-b33f-0b174ac226c5/metadata count=1' (cwd
>> None)
>> Thread-40129::DEBUG::2013-01-09 07:49:09,125::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00859049 s, 59.6 kB/s\n'; <rc> = 0
>> Thread-40129::DEBUG::2013-01-09 07:49:09,126::misc::325::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00859049 s, 59.6 kB/s'], size: 512
>> Thread-40129::INFO::2013-01-09
>> 07:49:09,126::image::344::Storage.Image::(getChain)
>> sdUUID=d668d949-42f5-49a3-b33f-0b174ac226c5
>> imgUUID=0a9e06f9-2e85-4410-930d-7e381128c212
>> chain=[<storage.blockVolume.BlockVolume object at 0x7f5760041490>]
>> Thread-40129::INFO::2013-01-09 07:49:09,127::logUtils::44::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'path': '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'chain': [{'path': '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'domainID': 'd668d949-42f5-49a3-b33f-0b174ac226c5', 'volumeID': '93410a32-21d8-498c-b12a-efb43a2ef881', 'imageID': '0a9e06f9-2e85-4410-930d-7e381128c212'}]}
>> Thread-40129::DEBUG::2013-01-09 07:49:09,128::task::1151::TaskManager.Task::(prepare) Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::finished: {'path': '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'chain': [{'path': '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'domainID': 'd668d949-42f5-49a3-b33f-0b174ac226c5', 'volumeID': '93410a32-21d8-498c-b12a-efb43a2ef881', 'imageID': '0a9e06f9-2e85-4410-930d-7e381128c212'}]}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,128::task::568::TaskManager.Task::(_updateState)
>> Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::moving from state preparing
>> -> state finished
>> Thread-40129::DEBUG::2013-01-09 07:49:09,128::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5': < ResourceRef 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5', isValid: 'True' obj: 'None'>}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,129::resourceManager::864::ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,129::resourceManager::557::ResourceManager::(releaseResource)
>> Trying to release resource
>> 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5'
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,130::resourceManager::573::ResourceManager::(releaseResource)
>> Released resource 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5' (0
>> active
>> users)
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,130::resourceManager::578::ResourceManager::(releaseResource)
>> Resource 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5' is free, finding
>> out if anyone is waiting for it.
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,131::resourceManager::585::ResourceManager::(releaseResource)
>> No
>> one is waiting for resource
>> 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5', Clearing records.
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,131::task::957::TaskManager.Task::(_decref)
>> Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::ref 0 aborting False
>> Thread-40129::INFO::2013-01-09 07:49:09,131::clientIF::316::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881
>> Thread-40128::DEBUG::2013-01-09
>> 07:49:09,145::API::483::vds::(migrationCreate) Destination VM creation
>> succeeded
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,144::vm::672::vm.Vm::(_startUnderlyingVm)
>> vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::_ongoingCreations released
>> Thread-40128::DEBUG::2013-01-09 07:49:09,147::BindingXMLRPC::915::vds::(wrapper) return vmMigrationCreate with {'status': {'message': 'Done', 'code': 0}, 'migrationPort': 0, 'params': {'status': 'Migration Destination', 'acpiEnable': 'true', 'emulatedMachine': 'pc', 'afterMigrationStatus': 'Up', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'pid': '0', 'transparentHugePages': 'true', 'displaySecurePort': '-1', 'timeOffset': 1, 'cpuType': 'SandyBridge', 'smp': '4', 'migrationDest': 'libvirt', 'custom': {}, 'vmType': 'kvm', 'spiceSslCipherSuite': 'DEFAULT', 'memSize': 4096, 'vmName': 'd0lpvf051', 'nice': '0', 'username': 'root', 'vmId': 'e00adf83-78d2-4f65-a259-0e01680f57fd', 'displayIp': '0', 'keyboardLayout': 'en-us', 'displayPort': '-1', 'smartcardEnable': 'false', 'guestIPs': '10.32.0.51', 'nicModel': 'rtl8139,pv', 'smpCoresPerSocket': '1', 'kvmEnable': 'true', 'pitReinjection': 'false', 'devices': [{'device': 'usb', 'alias': 'usb0', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide0', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'specParams': {'vram': '65536'}, 'alias': 'video0', 'deviceId': '99c46e79-710c-4f60-87eb-904fa496f0a7', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'qxl', 'type': 'video'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:20:00:9e', 'network': 'ovirtmgmt', 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': '054f3e29-d531-438b-8d1c-09cbed863bea', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:20:00:9f', 'network': 'ovirtmgmt', 'alias': 'net1', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': '48ff9261-fbd8-4f0f-a823-4c6451226b95', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet1'}, {'target': 4194304, 'specParams': {'model': 'virtio'}, 'alias': 'balloon0', 'deviceId': 'df32c652-f9c6-42db-ae88-579ee90aa714', 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'memballoon', 'type': 'balloon'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'shared': 'false', 'specParams': {'path': ''}, 'readonly': 'True', 'deviceId': '62d5042c-3b75-4cb3-b175-373f66c7356b', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'path': '', 'type': 'disk'}, {'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'index': 0, 'iface': 'virtio', 'apparentsize': '5368709120', 'alias': 'virtio-disk0', 'imageID': '0a9e06f9-2e85-4410-930d-7e381128c212', 'readonly': 'False', 'shared': 'false', 'truesize': '5368709120', 'type': 'disk', 'domainID': 'd668d949-42f5-49a3-b33f-0b174ac226c5', 'reqsize': '0', 'format': 'cow', 'deviceId': '0a9e06f9-2e85-4410-930d-7e381128c212', 'poolID': 'fdab6642-8928-43d5-8ca1-b68792d70729', 'device': 'disk', 'path': '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'volumeID': '93410a32-21d8-498c-b12a-efb43a2ef881', 'specParams': {}, 'volumeChain': [{'path': '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'domainID': 'd668d949-42f5-49a3-b33f-0b174ac226c5', 'volumeID': '93410a32-21d8-498c-b12a-efb43a2ef881', 'imageID': '0a9e06f9-2e85-4410-930d-7e381128c212'}]}, {'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}, {'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}], 'clientIp': '', 'display': 'qxl'}}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,148::libvirtvm::1751::vm.Vm::(_waitForIncomingMigrationFinish)
>> vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Waiting 300 seconds for end
>> of migration
>> Thread-40129::ERROR::2013-01-09
>> 07:49:09,310::vm::696::vm.Vm::(_startUnderlyingVm)
>> vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::The vm start process failed
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,311::vm::1045::vm.Vm::(setDownStatus)
>> vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Changed state to Down:
>> Domain
>> not found: no domain with matching uuid
>> 'e00adf83-78d2-4f65-a259-0e01680f57fd'
>>
>>
>>
>> thank you for your help!
>> jonathan
>>
>>
>>
>>
>> On 1/9/13 1:18 AM, "Haim Ateya" <hateya(a)redhat.com> wrote:
>>
>>> ----- Original Message -----
>>>> From: "Jonathan Horne" <jhorne(a)skopos.us>
>>>> To: "Jonathan Horne" <jhorne(a)skopos.us>, users(a)ovirt.org
>>>> Sent: Tuesday, January 8, 2013 10:26:52 PM
>>>> Subject: Re: [Users] ovirt 3.2 migrations failing
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> So far I see this, and it looks related:
>>> This only means libvirt can't find the guest on the host; was it on the
>>> source or the destination?
>>>
>>> please run the following:
>>>
>>> on source server:
>>>
>>> - egrep 'vmMigrate|_setupVdsConnection' /var/log/vdsm/vdsm.log
>>> - then, from the output, get the thread numbers of both commands and
>>> run grep again:
>>> - egrep 'Thread-$x|Thread-$y' /var/log/vdsm/vdsm.log
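>>> (for example, if those turn out to be Thread-30209 and Thread-30210:
>>> egrep 'Thread-30209|Thread-30210' /var/log/vdsm/vdsm.log)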
>>>
>>> on destination server:
>>>
>>> - egrep 'vmMigrationCreate|prepareImage' /var/log/vdsm/vdsm.log
>>> - then, from the output, get the thread numbers of both commands and
>>> run grep again:
>>> - egrep 'Thread-$x|Thread-$y' /var/log/vdsm/vdsm.log
>>>
>>> please paste it here.
>>>
>>> Haim
>>>
>>>
>>>
>>>
>>>>
>>>> Traceback (most recent call last):
>>>> File "/usr/share/vdsm/vm.py", line 676, in _startUnderlyingVm
>>>> self._waitForIncomingMigrationFinish()
>>>> File "/usr/share/vdsm/libvirtvm.py", line 1757, in
>>>> _waitForIncomingMigrationFinish
>>>> self._connection.lookupByUUIDString(self.id),
>>>> File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
>>>> ret = f(*args, **kwargs)
>>>> File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2682, in lookupByUUIDString
>>>> if ret is None:raise libvirtError('virDomainLookupByUUIDString() failed', conn=self)
>>>> libvirtError: Domain not found: no domain with matching uuid
>>>> '063c7cbe-c569-4df3-b9a7-6474c41d797e'
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> From: Jonathan Horne <jhorne(a)skopos.us>
>>>> Date: Tuesday, January 8, 2013 2:15 PM
>>>> To: "users(a)ovirt.org" <users(a)ovirt.org>
>>>> Subject: [Users] ovirt 3.2 migrations failing
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> I just built up 2 nodes and a manager on 3.2 dreyou packages, and now
>>>> that I have a VM up and installed with the rhev agent, the VM is unable
>>>> to migrate. The failure is pretty much immediate.
>>>>
>>>>
>>>> I don't know where to begin troubleshooting this. Can someone help me
>>>> get going in the right direction? Just let me know which logs are
>>>> appropriate and I will post them up.
>>>>
>>>>
>>>> thanks,
>>>> jonathan
>>>>
>>>>
>>>>
>>>> This is a PRIVATE message. If you are not the intended recipient,
>>>> please delete without copying and kindly advise us by e-mail of the
>>>> mistake in delivery. NOTE: Regardless of content, this e-mail shall
>>>> not operate to bind SKOPOS to any order or other contract unless
>>>> pursuant to explicit written agreement or government initiative
>>>> expressly permitting the use of e-mail for such purpose.
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users(a)ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>> ________________________________
>> This is a PRIVATE message. If you are not the intended recipient,
>> please delete without copying and kindly advise us by e-mail of the
>> mistake in delivery. NOTE: Regardless of content, this e-mail shall not
>> operate to bind SKOPOS to any order or other contract unless pursuant to
>> explicit written agreement or government initiative expressly permitting
>> the use of e-mail for such purpose.
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
________________________________
This is a PRIVATE message. If you are not the intended recipient, please delete without
copying and kindly advise us by e-mail of the mistake in delivery. NOTE: Regardless of
content, this e-mail shall not operate to bind SKOPOS to any order or other contract
unless pursuant to explicit written agreement or government initiative expressly
permitting the use of e-mail for such purpose.