Hi Ayal,
my bad, noob mistake. The Fed19 host is running lvm2-2.02.98-9.fc19.x86_64
and the Fed18 machines are running lvm2-2.02.98-4.fc18.x86_64; they all
come from the stable repos. Do you think this gap in versions could be the cause?
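By the way, that huge lseek offset in the log (18446744073575333888) looks to me like a negative seek value printed as an unsigned 64-bit integer. A quick sanity check (plain Python arithmetic of mine, nothing from vdsm):

```python
# Reinterpret the unsigned 64-bit offset from the lvm error as signed.
offset = 18446744073575333888
signed = offset - 2**64 if offset >= 2**63 else offset
print(signed)                   # -134217728
print(signed // (1024 * 1024))  # -128 (MiB)
```

If I'm reading that right, lvm is trying to seek 128MiB backwards past its reference point, which smells more like an offset-calculation bug than a misconfiguration.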
Cheers,
Daniele
On 07/22/2013 12:42 PM, Ayal Baron wrote:
----- Original Message -----
> Hi Ayal,
> thanks for your reply! Well, I just realized that I didn't give the
> whole picture: I've got a Fed18 3.2 engine and a Fed18 host running as
> SPM. I added a second Fed19 host whose log you're looking at right now.
> So, yes - that's the Fed19 host trying to become SPM when the Fed18 host
> goes into maintenance. As far as I understand, the error message you
> found suggests something goes wrong when the Fed19 host tries to
> vgchange the tags on the storage domain's volume group:
>
> # vgs
>   VG                                   #PV #LV #SN Attr   VSize   VFree
>   a646d974-19be-4a8d-bf68-a3974d46a823   1  17   0 wz--n- 511.62g 473.12g
>   fedora                                 1   3   0 wz--n-  67.50g       0
>
> Now, if I manually run the same command on the Fed19 while it's trying
> to be elected as SPM:
>
> /usr/bin/sudo -n /sbin/lvm vgchange --config " devices { preferred_names
> = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%360060160f4d03100dc456812fdb3e211%\\", \\"r%.*%\\" ] } global {
> locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
> retain_min = 50 retain_days = 0 } " --deltag MDT_POOL_SPM_LVER=18
> --deltag MDT_POOL_SPM_ID=-1 --deltag
> MDT__SHA_CKSUM=7d5585b7b7adeabd4cb193ea6f3e5af06bd8f64c --addtag
> MDT_POOL_SPM_ID=1 --addtag
> MDT__SHA_CKSUM=79a6a0705dc0011d7f44f24b5c639f5d8baae074 --addtag
> MDT_POOL_SPM_LVER=19 a646d974-19be-4a8d-bf68-a3974d46a823
>
> Parse error at byte 31 (line 1): expected a value
> Failed to set overridden configuration entries.
> Run `vgchange --help' for more information.
>
> Does this ring a bell?
This looks like a shell quoting issue (you cannot simply copy the command from the log
and run it, as it goes through some shell processing when run from python).
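As an illustration (a throwaway snippet of mine, not vdsm code): the log shows the escaped form of the argument, so the \\" sequences need to collapse back to plain quotes before the --config string parses again:

```python
# Hypothetical example: the vdsm log escapes the quotes inside --config;
# collapsing \\" back to " recovers the string lvm actually received.
logged = r'devices { preferred_names = [\\"^/dev/mapper/\\"] }'
config = logged.replace(r'\\"', '"')
print(config)  # devices { preferred_names = ["^/dev/mapper/"] }
```

Running the copied form as-is leaves lvm's config parser staring at a stray backslash, which would line up with the "Parse error at byte 31" you saw.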
What version of lvm do you have on this host?
> Cheers,
> Daniele
>
> On 07/21/2013 01:03 PM, Ayal Baron wrote:
>> Hi Daniele,
>>
>> Your lvm version seems wrong:
>>
>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>> 11:01:07,161::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
>> /sbin/lvm vgchange ... --deltag MDT_POOL_SPM_LVER=18 --deltag
>> MDT_POOL_SPM_ID=-1 --deltag
>> MDT__SHA_CKSUM=7d5585b7b7adeabd4cb193ea6f3e5af06bd8f64c --addtag
>> MDT_POOL_SPM_ID=1 --addtag
>> MDT__SHA_CKSUM=79a6a0705dc0011d7f44f24b5c639f5d8baae074 --addtag
>> MDT_POOL_SPM_LVER=19 a646d974-19be-4a8d-bf68-a3974d46a823'
>>
>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>> 11:01:07,200::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = '
>> /dev/mapper/360060160f4d03100dc456812fdb3e211: lseek 18446744073575333888
>> failed: Invalid argument\n'; <rc> = 5
>>
>> I'm guessing this is the Fedora 18?
>> Can you attach the vdsm.log for the Fedora 19 as well?
>>
>> ----- Original Message -----
>>> Hello everybody,
>>> I've been experimenting a bit with Fedora 19 + vdsm and I'm getting this
>>> funny VolumeGroupReplaceTagError. I've got a 3.2 engine running with two
>>> hosts, a Fed18 and a Fed19, both sporting vdsm-4.10.3-17. I can run VMs on
>>> both hosts but I can't migrate them and, worst of all, the engine
>>> can't elect the Fed19 host as SPM - it just keeps trying and trying to no
>>> avail. What's even weirder is that the Fed19's vdsm keeps complaining about
>>> not being able to find the storage, even when VMs are up and running on the
>>> very same host. Both hosts are accessing the same LUN via fibre channel.
>>>
>>> here is an excerpt from the offending vdsm running a VM:
>>>
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,078::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
>>> /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]
>>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
>>> filter = [ \\"a%360060160f4d03100dc456812fdb3e211%\\", \\"r%.*%\\" ] }
>>> global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
>>> retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
>>> --separator | -o
>>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
>>> a646d974-19be-4a8d-bf68-a3974d46a823' (cwd None)
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,115::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
>>> ''; <rc> = 0
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,117::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload
>>> operation' released the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,117::persistentDict::234::Storage.PersistentDict::(refresh) read
>>> lines
>>> (VGTagMetadataRW)=[u'PV0=pv:mpathb,uuid:TMIy2w-1GR2-DW93-SETn-AhJG-yXfA-mHYwWW,pestart:0,pecount:4093,mapoffset:0',
>>> 'VERSION=3', 'LOGBLKSIZE=512', 'DESCRIPTION=provisioning1',
>>> 'LEASERETRIES=3', 'LOCKRENEWALINTERVALSEC=5', 'LOCKPOLICY=',
>>> 'PHYBLKSIZE=512', 'SDUUID=a646d974-19be-4a8d-bf68-a3974d46a823',
>>> 'CLASS=Data', 'TYPE=FCP', 'VGUUID=bqt7AY-kDVm-rZnW-IMI6-83Zy-5Go2-HvNY6Y',
>>> 'IOOPTIMEOUTSEC=10', 'LEASETIMESEC=60', 'MASTER_VERSION=1',
>>> 'POOL_UUID=c923d5bb-ea08-44ea-937d-10eea4485ae2', 'ROLE=Master',
>>> 'POOL_DESCRIPTION=AreaProvisioning',
>>> u'POOL_DOMAINS=a646d974-19be-4a8d-bf68-a3974d46a823:Active,53b27dc5-fa49-4257-962d-b475e160d7b3:Active',
>>> 'POOL_SPM_LVER=18', 'POOL_SPM_ID=-1',
>>> '_SHA_CKSUM=7d5585b7b7adeabd4cb193ea6f3e5af06bd8f64c']
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,118::persistentDict::167::Storage.PersistentDict::(transaction)
>>> Starting transaction
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,118::persistentDict::173::Storage.PersistentDict::(transaction)
>>> Flushing changes
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,119::persistentDict::299::Storage.PersistentDict::(flush) about to
>>> write lines (VGTagMetadataRW)=['CLASS=Data', 'DESCRIPTION=provisioning1',
>>> 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
>>> 'LOCKRENEWALINTERVALSEC=5', 'LOGBLKSIZE=512', 'MASTER_VERSION=1',
>>> 'PHYBLKSIZE=512', 'POOL_DESCRIPTION=AreaProvisioning',
>>> u'POOL_DOMAINS=a646d974-19be-4a8d-bf68-a3974d46a823:Active,53b27dc5-fa49-4257-962d-b475e160d7b3:Active',
>>> 'POOL_SPM_ID=1', 'POOL_SPM_LVER=19',
>>> 'POOL_UUID=c923d5bb-ea08-44ea-937d-10eea4485ae2',
>>> u'PV0=pv:mpathb,uuid:TMIy2w-1GR2-DW93-SETn-AhJG-yXfA-mHYwWW,pestart:0,pecount:4093,mapoffset:0',
>>> 'ROLE=Master', 'SDUUID=a646d974-19be-4a8d-bf68-a3974d46a823', 'TYPE=FCP',
>>> 'VERSION=3', 'VGUUID=bqt7AY-kDVm-rZnW-IMI6-83Zy-5Go2-HvNY6Y',
>>> '_SHA_CKSUM=79a6a0705dc0011d7f44f24b5c639f5d8baae074']
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,119::lvm::483::OperationMutex::(_invalidatevgs) Operation 'lvm
>>> invalidate operation' got the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,119::lvm::485::OperationMutex::(_invalidatevgs) Operation 'lvm
>>> invalidate operation' released the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,120::lvm::493::OperationMutex::(_invalidatelvs) Operation 'lvm
>>> invalidate operation' got the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,120::lvm::505::OperationMutex::(_invalidatelvs) Operation 'lvm
>>> invalidate operation' released the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,120::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload
>>> operation' got the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,121::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
>>> /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]
>>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
>>> filter = [ \\"a%360060160f4d03100dc456812fdb3e211%\\", \\"r%.*%\\" ] }
>>> global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
>>> retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
>>> --separator | -o
>>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
>>> a646d974-19be-4a8d-bf68-a3974d46a823' (cwd None)
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,157::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
>>> ''; <rc> = 0
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,159::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload
>>> operation' released the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,160::blockSD::300::storage.Metadata.VGTagMetadataRW::(writelines)
>>> Updating metadata adding=MDT_POOL_SPM_ID=1,
>>> MDT__SHA_CKSUM=79a6a0705dc0011d7f44f24b5c639f5d8baae074,
>>> MDT_POOL_SPM_LVER=19 removing=MDT_POOL_SPM_ID=-1,
>>> MDT__SHA_CKSUM=7d5585b7b7adeabd4cb193ea6f3e5af06bd8f64c,
>>> MDT_POOL_SPM_LVER=18
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,161::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
>>> /sbin/lvm vgchange --config " devices { preferred_names =
>>> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
>>> disable_after_error_count=3 filter = [
>>> \\"a%360060160f4d03100dc456812fdb3e211%\\", \\"r%.*%\\" ] } global {
>>> locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
>>> retain_min = 50 retain_days = 0 } " --deltag MDT_POOL_SPM_LVER=18 --deltag
>>> MDT_POOL_SPM_ID=-1 --deltag
>>> MDT__SHA_CKSUM=7d5585b7b7adeabd4cb193ea6f3e5af06bd8f64c --addtag
>>> MDT_POOL_SPM_ID=1 --addtag
>>> MDT__SHA_CKSUM=79a6a0705dc0011d7f44f24b5c639f5d8baae074 --addtag
>>> MDT_POOL_SPM_LVER=19 a646d974-19be-4a8d-bf68-a3974d46a823' (cwd None)
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,200::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = '
>>> /dev/mapper/360060160f4d03100dc456812fdb3e211: lseek 18446744073575333888
>>> failed: Invalid argument\n'; <rc> = 5
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,204::lvm::483::OperationMutex::(_invalidatevgs) Operation 'lvm
>>> invalidate operation' got the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,204::lvm::485::OperationMutex::(_invalidatevgs) Operation 'lvm
>>> invalidate operation' released the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::WARNING::2013-07-18
>>> 11:01:07,205::persistentDict::178::Storage.PersistentDict::(transaction)
>>> Error in transaction, rolling back changes
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/persistentDict.py", line 174, in transaction
>>>     self.flush(self._metadata)
>>>   File "/usr/share/vdsm/storage/persistentDict.py", line 300, in flush
>>>     self._metaRW.writelines(lines)
>>>   File "/usr/share/vdsm/storage/blockSD.py", line 301, in writelines
>>>     lvm.changeVGTags(self._vgName, delTags=toRemove, addTags=toAdd)
>>>   File "/usr/share/vdsm/storage/lvm.py", line 1219, in changeVGTags
>>>     (vgName, ", ".join(delTags), ", ".join(addTags), err[-1]))
>>> VolumeGroupReplaceTagError: Replace Volume Group tag error:
>>> ('vg:a646d974-19be-4a8d-bf68-a3974d46a823 del:MDT_POOL_SPM_LVER=18,
>>> MDT_POOL_SPM_ID=-1,
>>> MDT__SHA_CKSUM=7d5585b7b7adeabd4cb193ea6f3e5af06bd8f64c
>>> add:MDT_POOL_SPM_ID=1,
>>> MDT__SHA_CKSUM=79a6a0705dc0011d7f44f24b5c639f5d8baae074,
>>> MDT_POOL_SPM_LVER=19 ( /dev/mapper/360060160f4d03100dc456812fdb3e211:
>>> lseek
>>> 18446744073575333888 failed: Invalid argument)',)
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,206::persistentDict::299::Storage.PersistentDict::(flush) about to
>>> write lines (VGTagMetadataRW)=['CLASS=Data', 'DESCRIPTION=provisioning1',
>>> 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
>>> 'LOCKRENEWALINTERVALSEC=5', 'LOGBLKSIZE=512', 'MASTER_VERSION=1',
>>> 'PHYBLKSIZE=512', 'POOL_DESCRIPTION=AreaProvisioning',
>>> u'POOL_DOMAINS=a646d974-19be-4a8d-bf68-a3974d46a823:Active,53b27dc5-fa49-4257-962d-b475e160d7b3:Active',
>>> 'POOL_SPM_ID=-1', 'POOL_SPM_LVER=18',
>>> 'POOL_UUID=c923d5bb-ea08-44ea-937d-10eea4485ae2',
>>> u'PV0=pv:mpathb,uuid:TMIy2w-1GR2-DW93-SETn-AhJG-yXfA-mHYwWW,pestart:0,pecount:4093,mapoffset:0',
>>> 'ROLE=Master', 'SDUUID=a646d974-19be-4a8d-bf68-a3974d46a823', 'TYPE=FCP',
>>> 'VERSION=3', 'VGUUID=bqt7AY-kDVm-rZnW-IMI6-83Zy-5Go2-HvNY6Y',
>>> '_SHA_CKSUM=7d5585b7b7adeabd4cb193ea6f3e5af06bd8f64c']
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,206::lvm::483::OperationMutex::(_invalidatevgs) Operation 'lvm
>>> invalidate operation' got the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,206::lvm::485::OperationMutex::(_invalidatevgs) Operation 'lvm
>>> invalidate operation' released the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,207::lvm::493::OperationMutex::(_invalidatelvs) Operation 'lvm
>>> invalidate operation' got the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,207::lvm::505::OperationMutex::(_invalidatelvs) Operation 'lvm
>>> invalidate operation' released the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,207::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload
>>> operation' got the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,208::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
>>> /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]
>>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
>>> filter = [ \\"a%360060160f4d03100dc456812fdb3e211%\\", \\"r%.*%\\" ] }
>>> global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
>>> retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
>>> --separator | -o
>>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
>>> a646d974-19be-4a8d-bf68-a3974d46a823' (cwd None)
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,244::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
>>> ''; <rc> = 0
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
>>> 11:01:07,246::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload
>>> operation' released the operation mutex
>>> 9b238eef-380f-4fd4-aca1-465f675b7656::ERROR::2013-07-18
>>> 11:01:07,247::sp::314::Storage.StoragePool::(startSpm) Unexpected error
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/sp.py", line 268, in startSpm
>>>     PMDK_SPM_ID: self.id}, __securityOverride=True)
>>>   File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
>>>     return f(self, *args, **kwargs)
>>>   File "/usr/share/vdsm/storage/sp.py", line 1478, in setMetaParams
>>>     self._metadata.update(params)
>>>   File "/usr/share/vdsm/storage/persistentDict.py", line 121, in update
>>>     self._dict.update(metadata)
>>>   File "/usr/share/vdsm/storage/persistentDict.py", line 214, in update
>>>     self._metadata.update(metadata)
>>>   File "/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
>>>     self.gen.next()
>>>   File "/usr/share/vdsm/storage/persistentDict.py", line 174, in transaction
>>>     self.flush(self._metadata)
>>>   File "/usr/share/vdsm/storage/persistentDict.py", line 300, in flush
>>>     self._metaRW.writelines(lines)
>>>   File "/usr/share/vdsm/storage/blockSD.py", line 301, in writelines
>>>     lvm.changeVGTags(self._vgName, delTags=toRemove, addTags=toAdd)
>>>   File "/usr/share/vdsm/storage/lvm.py", line 1219, in changeVGTags
>>>     (vgName, ", ".join(delTags), ", ".join(addTags), err[-1]))
>>> VolumeGroupReplaceTagError: Replace Volume Group tag error:
>>> ('vg:a646d974-19be-4a8d-bf68-a3974d46a823 del:MDT_POOL_SPM_LVER=18,
>>> MDT_POOL_SPM_ID=-1,
>>> MDT__SHA_CKSUM=7d5585b7b7adeabd4cb193ea6f3e5af06bd8f64c
>>> add:MDT_POOL_SPM_ID=1,
>>> MDT__SHA_CKSUM=79a6a0705dc0011d7f44f24b5c639f5d8baae074,
>>> MDT_POOL_SPM_LVER=19 ( /dev/mapper/360060160f4d03100dc456812fdb3e211:
>>> lseek
>>> 18446744073575333888 failed: Invalid argument)',)
>>>
>>> meanwhile, on the engine side of the thing (same behaviour at a different
>>> time of the day):
>>>
>>> 2013-07-18 15:19:18,736 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
>>> (DefaultQuartzScheduler_Worker-10) [1cdd7d6f] START,
>>> HSMClearTaskVDSCommand(HostName = heidi, HostId =
>>> 7c0f1d65-750e-458c-95ac-90846d4755bf,
>>> taskId=4879a4ab-41fc-4991-a93b-7407a21701e0), log id: bff1ad1
>>> 2013-07-18 15:19:18,745 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
>>> (DefaultQuartzScheduler_Worker-10) [1cdd7d6f] FINISH,
>>> HSMClearTaskVDSCommand, log id: bff1ad1
>>> 2013-07-18 15:19:18,746 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
>>> (DefaultQuartzScheduler_Worker-10) [1cdd7d6f] FINISH, SpmStartVDSCommand,
>>> return:
>>> org.ovirt.engine.core.common.businessentities.SpmStatusResult@1939957e,
>>> log
>>> id: 72518013
>>> 2013-07-18 15:19:18,749 INFO
>>> [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
>>> (DefaultQuartzScheduler_Worker-10) [13859022] Running command:
>>> SetStoragePoolStatusCommand internal: true. Entities affected : ID:
>>> c923d5bb-ea08-44ea-937d-10eea4485ae2 Type: StoragePool
>>> 2013-07-18 15:19:18,783 INFO
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler_Worker-10) [13859022] No string for
>>> SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC type. Use default Log
>>> 2013-07-18 15:19:18,790 ERROR
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
>>> (DefaultQuartzScheduler_Worker-10) [13859022]
>>> IrsBroker::Failed::GetStoragePoolInfoVDS due to:
>>> IrsSpmStartFailedException:
>>> IRSGenericException: IRSErrorException: SpmStart failed
>>> 2013-07-18 15:19:28,893 INFO
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] hostFromVds::selectedVds -
>>> heidi, spmStatus Free, storage pool AreaProvisioning
>>> 2013-07-18 15:19:28,899 INFO
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] starting spm on vds heidi,
>>> storage pool AreaProvisioning, prevId -1, LVER 18
>>> 2013-07-18 15:19:28,925 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] START,
>>> SpmStartVDSCommand(HostName = heidi, HostId =
>>> 7c0f1d65-750e-458c-95ac-90846d4755bf, storagePoolId =
>>> c923d5bb-ea08-44ea-937d-10eea4485ae2, prevId=-1, prevLVER=18,
>>> storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id:
>>> 1101aacc
>>> 2013-07-18 15:19:28,942 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] spmStart polling started:
>>> taskId = 03d8ee8f-cf3d-42c5-be4a-a9136f5e9fa1
>>> 2013-07-18 15:19:29,974 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] Failed in HSMGetTaskStatusVDS
>>> method
>>> 2013-07-18 15:19:29,976 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] Error code
>>> VolumeGroupReplaceTagError and error message VDSGenericException:
>>> VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Replace Volume
>>> Group tag error
>>> 2013-07-18 15:19:29,977 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] spmStart polling ended:
>>> taskId
>>> = 03d8ee8f-cf3d-42c5-be4a-a9136f5e9fa1 task status = finished
>>> 2013-07-18 15:19:29,978 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] Start SPM Task failed -
>>> result:
>>> cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to
>>> HSMGetTaskStatusVDS, error = Replace Volume Group tag error
>>> 2013-07-18 15:19:29,985 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] spmStart polling ended, spm
>>> status: Free
>>> 2013-07-18 15:19:30,010 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] START,
>>> HSMClearTaskVDSCommand(HostName = heidi, HostId =
>>> 7c0f1d65-750e-458c-95ac-90846d4755bf,
>>> taskId=03d8ee8f-cf3d-42c5-be4a-a9136f5e9fa1), log id: 115a9661
>>> 2013-07-18 15:19:30,018 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] FINISH,
>>> HSMClearTaskVDSCommand,
>>> log id: 115a9661
>>> 2013-07-18 15:19:30,019 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
>>> (DefaultQuartzScheduler_Worker-5) [19567f5a] FINISH, SpmStartVDSCommand,
>>> return:
>>> org.ovirt.engine.core.common.businessentities.SpmStatusResult@5246c32d,
>>> log
>>> id: 1101aacc
>>> 2013-07-18 15:19:30,022 INFO
>>> [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
>>> (DefaultQuartzScheduler_Worker-5) [23b7e98e] Running command:
>>> SetStoragePoolStatusCommand internal: true. Entities affected : ID:
>>> c923d5bb-ea08-44ea-937d-10eea4485ae2 Type: StoragePool
>>> 2013-07-18 15:19:30,054 INFO
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler_Worker-5) [23b7e98e] No string for
>>> SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC type. Use default Log
>>> 2013-07-18 15:19:30,061 ERROR
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
>>> (DefaultQuartzScheduler_Worker-5) [23b7e98e]
>>> IrsBroker::Failed::GetStoragePoolInfoVDS due to:
>>> IrsSpmStartFailedException:
>>> IRSGenericException: IRSErrorException: SpmStart failed
>>>
>>> the whole vdsm log is attached. Got any clues?
>>>
>>> Thanks,
>>> Daniele Pavia
>>>
>>> Engineering I.I.
>>> R&D Labs
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users