Hi guys,
Apologies for the late reply.
The VM (Tux) was created about two years ago; it was converted from a
physical machine using Clonezilla. It has been migrated a number of
times in the past; only now, when trying to move it off node03, is it
giving this error.
I've looked for any attached images/CDs and unfortunately found none.
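For reference, the check was roughly along these lines (a read-only
look at the domain XML via the libvirt Python bindings; 'tux' is the
domain name as reported by virsh):

    # Scan the live domain XML for any cdrom/floppy device.
    import libvirt

    conn = libvirt.openReadOnly(None)  # local hypervisor, read-only
    dom = conn.lookupByName('tux')
    for line in dom.XMLDesc(0).splitlines():
        if 'cdrom' in line or 'floppy' in line:
            print(line.strip())
    conn.close()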
Thank you so much for your assistance so far.
Regards.
Neil Wilson.
On Wed, Jan 8, 2014 at 12:23 PM, Dafna Ron <dron@redhat.com> wrote:
Please check if the VMs were booted with a CD...

[...]bject at 0x7fb1f00cbbd0>> log:<logUtils.SimpleLogAdapter instance at
0x7fb1f00be7e8> name:hdc networkDev:False path: readonly:True reqsize:0
serial: truesize:0 *type:cdrom* volExtensionChunk:1024
watermarkLimit:536870912
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 356, in teardownVolumePath
    res = self.irs.teardownImage(drive['domainID'],
  File "/usr/share/vdsm/vm.py", line 1386, in __getitem__
    raise KeyError(key)
KeyError: 'domainID'
Thread-847747::WARNING::2014-01-07
14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm
image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50
_blockDev:True _checkIoTuneCategories:<bound method
Drive._checkIoTuneCategories of <vm.Drive object at 0x7fb1f00cbc10>>
_customize:<bound method Drive._customize of <vm.Drive object at
0x7fb1f00cbc10>> _deviceXML:<disk device="disk" snapshot="no" type="block">
            <driver cache="none" error_policy="stop" io="native"
name="qemu" type="raw"/>
            <source
dev="/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258"/>
            <target bus="ide" dev="hda"/>
            <serial>9f16f896-1da3-4f9a-a305-ac9c4f51a482</serial>
            <alias name="ide0-0-0"/>
            <address bus="0" controller="0" target="0" type="drive" unit="0"/>
Thread-847747::INFO::2014-01-07
14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect:
inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3')
Thread-847747::INFO::2014-01-07
14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect:
inappropriateDevices, Return response: None
On 01/08/2014 06:28 AM, Neil wrote:
>
> Hi Dafna,
>
> Thanks for the reply.
>
> Attached is the log from the source server (node03).
>
> I'll reply to your other questions as soon as I'm back in the office
> this afternoon; I have to run off to a meeting.
>
> Regards.
>
> Neil Wilson.
>
>
> On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron <dron@redhat.com> wrote:
>>
>> Ok... several things :)
>>
>> 1. For migration we need to see the vdsm logs from both src and dst.
>>
>> 2. Is it possible that the VM has an ISO attached? I ask because I see
>> that you are having problems with the ISO domain:
>>
>> 2014-01-07 14:26:27,714 ERROR
>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
>> (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso
>> was
>> reported with error code 358
>>
>>
>> Thread-1165153::DEBUG::2014-01-07
>> 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper)
>> Unknown
>> libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no
>> domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
>>
>> Thread-19::ERROR::2014-01-07
>> 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
>> e9ab725d-69c1-4a59-b225-b995d095c289 not found
>> Traceback (most recent call last):
>> File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
>> dom = findMethod(sdUUID)
>> File "/usr/share/vdsm/storage/sdc.py", line 171, in
>> _findUnfetchedDomain
>> raise se.StorageDomainDoesNotExist(sdUUID)
>> StorageDomainDoesNotExist: Storage domain does not exist:
>> (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
>> Thread-19::ERROR::2014-01-07
>> 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain)
>> Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289
>> monitoring information
>> Traceback (most recent call last):
>> File "/usr/share/vdsm/storage/domainMonitor.py", line 190, in
>> _monitorDomain
>> self.domain = sdCache.produce(self.sdUUID)
>> File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
>> domain.getRealDomain()
>> File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
>> return self._cache._realProduce(self._sdUUID)
>> File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
>> domain = self._findDomain(sdUUID)
>> File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
>> dom = findMethod(sdUUID)
>> File "/usr/share/vdsm/storage/sdc.py", line 171, in
>> _findUnfetchedDomain
>> raise se.StorageDomainDoesNotExist(sdUUID)
>> StorageDomainDoesNotExist: Storage domain does not exist:
>> (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
>> Dummy-29013::DEBUG::2014-01-07
>> 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail)
>> 'dd if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox
>> iflag=direct,fullblock count=1 bs=1024000' (cwd None)
>>
>> 3. The migration fails with a libvirt error, but we need the trace from
>> the second log:
>>
>> Thread-1165153::DEBUG::2014-01-07
>> 13:39:42,451::sampling::292::vm.Vm::(stop)
>> vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection
>> Thread-1163583::DEBUG::2014-01-07
>> 13:39:42,452::sampling::323::vm.Vm::(run)
>> vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished
>> Thread-1165153::DEBUG::2014-01-07
>> 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper)
>> Unknown
>> libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no
>> domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
>>
>>
>> 4. But I am worried about this and would like more info about this VM...
>>
>> Thread-247::ERROR::2014-01-07
>> 15:35:14,868::sampling::355::vm.Vm::(collect)
>> vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed:
>> <AdvancedStatsFunction _highWrite at 0x2ce0998>
>> Traceback (most recent call last):
>> File "/usr/share/vdsm/sampling.py", line 351, in collect
>> statsFunction()
>> File "/usr/share/vdsm/sampling.py", line 226, in __call__
>> retValue = self._function(*args, **kwargs)
>> File "/usr/share/vdsm/vm.py", line 509, in _highWrite
>> if not vmDrive.blockDev or vmDrive.format != 'cow':
>> AttributeError: 'Drive' object has no attribute 'format'
>>
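>> Roughly what seems to be tripping here (a sketch, not vdsm's actual
>> code): the stats function assumes every block-device Drive carries a
>> 'format' attribute, and this vm appears to have one that doesn't:
>>
>>     class Drive(object):
>>         def __init__(self, **kwargs):
>>             self.__dict__.update(kwargs)  # only sets what was passed in
>>
>>     good = Drive(blockDev=True, format='cow')
>>     odd = Drive(blockDev=True)  # no 'format' attribute at all
>>
>>     for d in (good, odd):
>>         # same check as vm.py line 509; raises AttributeError on 'odd'
>>         if not d.blockDev or d.format != 'cow':
>>             continue
>>
>> Which is why I'd like to know how the vm's devices were defined:
>>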
>> How did you create this VM? Was it from the UI? Was it from a script?
>> What are the parameters you used?
>>
>> Thanks,
>>
>> Dafna
>>
>>
>>
>> On 01/07/2014 04:34 PM, Neil wrote:
>>>
>>> Hi Elad,
>>>
>>> Thanks for assisting me. Yes, the same condition exists; if I try to
>>> migrate Tux it says "The VM Tux is being migrated".
>>>
>>>
>>> Below are the details requested.
>>>
>>>
>>> [root@node01 ~]# virsh -r list
>>> Id Name State
>>> ----------------------------------------------------
>>> 1 adam running
>>>
>>> [root@node01 ~]# pgrep qemu
>>> 11232
>>> [root@node01 ~]# vdsClient -s 0 list table
>>> 63da7faa-f92a-4652-90f2-b6660a4fb7b3 11232 adam Up
>>>
>>>
>>> [root@node03 ~]# virsh -r list
>>> Id Name State
>>> ----------------------------------------------------
>>> 7 tux running
>>>
>>> [root@node03 ~]# pgrep qemu
>>> 32333
>>> [root@node03 ~]# vdsClient -s 0 list table
>>> 2736197b-6dc3-4155-9a29-9306ca64881d 32333 tux Up
>>>
>>> Thanks.
>>>
>>> Regards.
>>>
>>> Neil Wilson.
>>>
>>>
>>> On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon <ebenahar@redhat.com>
>>> wrote:
>>>>
>>>> Is it still in the same condition?
>>>> If yes, please add the outputs from both hosts for:
>>>>
>>>> #virsh -r list
>>>> #pgrep qemu
>>>> #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are
>>>> working in insecure mode)
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Elad Ben Aharon
>>>> RHEV-QE storage team
>>>>
>>>>
>>>>
>>>>
>>>> ----- Original Message -----
>>>> From: "Neil" <nwilson123(a)gmail.com>
>>>> To: users(a)ovirt.org
>>>> Sent: Tuesday, January 7, 2014 4:21:43 PM
>>>> Subject: [Users] Migration Failed
>>>>
>>>> Hi guys,
>>>>
>>>> I've tried to migrate a VM from one host (node03) to another (node01),
>>>> and it failed, leaving the VM (tux) on the original host. I've now
>>>> tried to migrate the same VM again, and it picks up that the previous
>>>> migration is still in progress and refuses to migrate.
>>>>
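>>>> (In case it's useful: a read-only way to see whether libvirt still
>>>> thinks a migration job is active on the source, sketched with the
>>>> libvirt Python bindings; 'tux' is the domain name:
>>>>
>>>>     import libvirt
>>>>     conn = libvirt.openReadOnly(None)
>>>>     dom = conn.lookupByName('tux')
>>>>     # jobInfo()[0] is the job type; 0 means no job is active
>>>>     print(dom.jobInfo()[0])
>>>>     conn.close()
>>>>
>>>> On node03 this might show whether the failed migration left a job
>>>> behind.)
>>>>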
>>>> I've checked for the KVM process on each of the hosts, and the VM is
>>>> definitely still running on node03, so there doesn't appear to be any
>>>> chance of the VM trying to run on both hosts (which I've had happen
>>>> before, and which is very scary).
>>>>
>>>> These are my versions... and attached are my engine.log and my vdsm.log
>>>>
>>>> CentOS 6.5
>>>> ovirt-iso-uploader-3.3.1-1.el6.noarch
>>>> ovirt-host-deploy-1.1.2-1.el6.noarch
>>>> ovirt-release-el6-9-1.noarch
>>>> ovirt-engine-setup-3.3.1-2.el6.noarch
>>>> ovirt-engine-3.3.1-2.el6.noarch
>>>> ovirt-host-deploy-java-1.1.2-1.el6.noarch
>>>> ovirt-image-uploader-3.3.1-1.el6.noarch
>>>> ovirt-engine-dbscripts-3.3.1-2.el6.noarch
>>>> ovirt-engine-cli-3.3.0.6-1.el6.noarch
>>>> ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch
>>>> ovirt-engine-userportal-3.3.1-2.el6.noarch
>>>> ovirt-log-collector-3.3.1-1.el6.noarch
>>>> ovirt-engine-tools-3.3.1-2.el6.noarch
>>>> ovirt-engine-lib-3.3.1-2.el6.noarch
>>>> ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch
>>>> ovirt-engine-backend-3.3.1-2.el6.noarch
>>>> ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
>>>> ovirt-engine-restapi-3.3.1-2.el6.noarch
>>>>
>>>>
>>>> vdsm-python-4.13.0-11.el6.x86_64
>>>> vdsm-cli-4.13.0-11.el6.noarch
>>>> vdsm-xmlrpc-4.13.0-11.el6.noarch
>>>> vdsm-4.13.0-11.el6.x86_64
>>>> vdsm-python-cpopen-4.13.0-11.el6.x86_64
>>>>
>>>> I've had a few issues with this particular installation in the past,
>>>> as it's from a very old pre release of ovirt, then upgrading to the
>>>> dreyou repo, then finally moving to the official Centos ovirt repo.
>>>>
>>>> Thanks, any help is greatly appreciated.
>>>>
>>>> Regards.
>>>>
>>>> Neil Wilson.
>>>>
>>>
>>
>>
>>
>> --
>> Dafna Ron
--
Dafna Ron