[Users] storage domain reactivate not working

Rene Rosenberger r.rosenberger at netbiscuits.com
Mon Apr 2 06:30:16 UTC 2012


Hi,

OK, but how can I delete it if nothing works? I want to create a new storage domain.

-----Original Message-----
From: Saggi Mizrahi [mailto:smizrahi at redhat.com]
Sent: Friday, March 30, 2012 21:00
To: rvaknin at redhat.com
Cc: users at ovirt.org; Rene Rosenberger
Subject: Re: AW: [Users] storage domain reactivate not working

I am currently working on patches to fix the issues with upgraded domains. I've been ill for most of last week, so it is taking a bit more time than it should.

----- Original Message -----
> From: "Rami Vaknin" <rvaknin at redhat.com>
> To: "Saggi Mizrahi" <smizrahi at redhat.com>, "Rene Rosenberger"
> <r.rosenberger at netbiscuits.com>
> Cc: users at ovirt.org
> Sent: Thursday, March 29, 2012 11:57:08 AM
> Subject: Fwd: AW: [Users] storage domain reactivate not working
>
> Rene, VDSM can't read the storage domain's metadata. The problem is
> that vdsm tries to read the metadata using the 'dd' command, which
> applies only to the old storage domain format; in the new format the
> metadata is saved as VG tags. Are you using a storage domain version
> lower than V2? Can you attach the full log?
>
> Saggi, any thoughts on that?
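>
> (A quick way to tell which format the domain uses, assuming the VG is
> visible on the host: V2+ domains keep their metadata as tags on the VG,
> which is named after the storage domain UUID, while older domains keep
> it on a "metadata" LV that vdsm reads with dd. A minimal sketch:)
>
> # List the tags on the storage domain's VG; on V2+ domains the
> # metadata key=value pairs show up here.
> sudo vgs --noheadings -o vg_tags 8ed25a57-f53a-4cf0-bb92-781f3ce36a48
>
> # The direct read vdsm performs on pre-V2 domains; an I/O error here
> # matches the dd failure in the posted log.
> sudo dd iflag=direct bs=2048 count=1 skip=0 \
>     if=/dev/8ed25a57-f53a-4cf0-bb92-781f3ce36a48/metadata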
>
> -------- Original Message --------
> Subject:      AW: [Users] storage domain reactivate not working
> Date:         Thu, 29 Mar 2012 06:33:27 -0400
> From:         Rene Rosenberger <r.rosenberger at netbiscuits.com>
> To:   rvaknin at redhat.com, users at ovirt.org
>
> Hi,
>
> Not sure if the logs I posted are what you need. The thing is that the
> iSCSI target is connected, but in the web GUI it is locked. Can I
> unlock it?
>
> Regards, rene
>
> From: Rene Rosenberger
> Sent: Thursday, March 29, 2012 12:00
> To: Rene Rosenberger; rvaknin at redhat.com; users at ovirt.org
> Subject: AW: [Users] storage domain reactivate not working
>
> Hi,
>
> This is the error message:
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:46,310::misc::1032::SamplingMethod::(__call__) Returning last
> result
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:46,313::lvm::349::OperationMutex::(_reloadvgs) Operation 'lvm
> reload operation' got the operation mutex
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:46,322::lvm::284::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
> /sbin/lvm vgs --config " devices { preferred_names =
> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%360014052dd702d2defc8d459adba02dc|360014057fda80efdcae4d414eda829d7%\\",
> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
> --noheadings --units b --nosuffix --separator | -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
> 8ed25a57-f53a-4cf0-bb92-781f3ce36a48' (cwd None)
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,096::lvm::284::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '
> /dev/mapper/360014057fda80efdcae4d414eda829d7: read failed after 0 of
> 4096 at 2147483582464: Input/output error\n
> /dev/mapper/360014057fda80efdcae4d414eda829d7: read failed after 0 of
> 4096 at 2147483639808: Input/output error\n
> /dev/mapper/360014057fda80efdcae4d414eda829d7: read failed after 0 of
> 4096 at 0: Input/output error\n WARNING: Error counts reached a limit
> of 3. Device /dev/mapper/360014057fda80efdcae4d414eda829d7 was
> disabled\n'; <rc> = 0
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,105::lvm::376::OperationMutex::(_reloadvgs) Operation 'lvm
> reload operation' released the operation mutex
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,107::persistentDict::175::Storage.PersistentDict::(__init__)
> Created a persistant dict with LvMetadataRW backend
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,110::blockSD::177::Storage.Misc.excCmd::(readlines)
> '/bin/dd iflag=direct skip=0 bs=2048
> if=/dev/8ed25a57-f53a-4cf0-bb92-781f3ce36a48/metadata count=1' (cwd
> None)
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,155::blockSD::177::Storage.Misc.excCmd::(readlines) FAILED:
> <err> = "/bin/dd: reading
> `/dev/8ed25a57-f53a-4cf0-bb92-781f3ce36a48/metadata': Input/output
> error\n0+0 records in\n0+0 records out\n0 bytes (0 B) copied,
> 0.000525019 s, 0.0 kB/s\n"; <rc> = 1
>
> Thread-5448::ERROR::2012-03-29
> 11:57:47,158::sdc::113::Storage.StorageDomainCache::(_findDomain)
> Error while looking for domain
> `8ed25a57-f53a-4cf0-bb92-781f3ce36a48`
>
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/sdc.py", line 109, in _findDomain
>     return mod.findDomain(sdUUID)
>   File "/usr/share/vdsm/storage/blockSD.py", line 1051, in findDomain
>     return BlockStorageDomain(BlockStorageDomain.findDomainPath(sdUUID))
>   File "/usr/share/vdsm/storage/blockSD.py", line 241, in __init__
>     metadata = selectMetadata(sdUUID)
>   File "/usr/share/vdsm/storage/blockSD.py", line 210, in selectMetadata
>     if len(mdProvider) > 0:
>   File "/usr/share/vdsm/storage/persistentDict.py", line 51, in __len__
>     return len(self.keys())
>   File "/usr/share/vdsm/storage/persistentDict.py", line 95, in keys
>     return list(self.__iter__())
>   File "/usr/share/vdsm/storage/persistentDict.py", line 92, in __iter__
>     return ifilter(lambda k: k in self._validatorDict, self._dict.__iter__())
>   File "/usr/share/vdsm/storage/persistentDict.py", line 209, in __iter__
>     with self._accessWrapper():
>   File "/usr/lib64/python2.6/contextlib.py", line 16, in __enter__
>     return self.gen.next()
>   File "/usr/share/vdsm/storage/persistentDict.py", line 137, in _accessWrapper
>     self.refresh()
>   File "/usr/share/vdsm/storage/persistentDict.py", line 214, in refresh
>     lines = self._metaRW.readlines()
>   File "/usr/share/vdsm/storage/blockSD.py", line 177, in readlines
>     m = misc.readblockSUDO(self.metavol, self._offset, self._size)
>   File "/usr/share/vdsm/storage/misc.py", line 307, in readblockSUDO
>     raise se.MiscBlockReadException(name, offset, size)
> MiscBlockReadException: Internal block device read failure:
> 'name=/dev/8ed25a57-f53a-4cf0-bb92-781f3ce36a48/metadata, offset=0,
> size=2048'
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,177::resourceManager::538::ResourceManager::(releaseResource)
> Trying to release resource
> 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a'
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,180::resourceManager::553::ResourceManager::(releaseResource)
> Released resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' (0
> active users)
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,181::resourceManager::558::ResourceManager::(releaseResource)
> Resource 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a' is free,
> finding out if anyone is waiting for it.
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,183::resourceManager::565::ResourceManager::(releaseResource)
> No one is waiting for resource
> 'Storage.13080edc-77ea-11e1-b6a4-525400c49d2a', Clearing records.
>
> Thread-5448::ERROR::2012-03-29
> 11:57:47,185::task::853::TaskManager.Task::(_setError)
> Task=`838b7ca3-9f79-4c87-a2f0-12cad48cc127`::Unexpected error
>
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 861, in _run
>     return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 813, in connectStoragePool
>     return self._connectStoragePool(spUUID, hostID, scsiKey, msdUUID,
>         masterVersion, options)
>   File "/usr/share/vdsm/storage/hsm.py", line 855, in _connectStoragePool
>     res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
>   File "/usr/share/vdsm/storage/sp.py", line 641, in connect
>     self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
>   File "/usr/share/vdsm/storage/sp.py", line 1107, in __rebuild
>     self.masterDomain = self.getMasterDomain(msdUUID=msdUUID,
>         masterVersion=masterVersion)
>   File "/usr/share/vdsm/storage/sp.py", line 1442, in getMasterDomain
>     raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
> StoragePoolMasterNotFound: Cannot find master domain:
> 'spUUID=13080edc-77ea-11e1-b6a4-525400c49d2a,
> msdUUID=8ed25a57-f53a-4cf0-bb92-781f3ce36a48'
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,188::task::872::TaskManager.Task::(_run)
> Task=`838b7ca3-9f79-4c87-a2f0-12cad48cc127`::Task._run:
> 838b7ca3-9f79-4c87-a2f0-12cad48cc127
> ('13080edc-77ea-11e1-b6a4-525400c49d2a', 1,
> '13080edc-77ea-11e1-b6a4-525400c49d2a',
> '8ed25a57-f53a-4cf0-bb92-781f3ce36a48', 1) {} failed - stopping task
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,190::task::1199::TaskManager.Task::(stop)
> Task=`838b7ca3-9f79-4c87-a2f0-12cad48cc127`::stopping in state
> preparing (force False)
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,192::task::978::TaskManager.Task::(_decref)
> Task=`838b7ca3-9f79-4c87-a2f0-12cad48cc127`::ref 1 aborting True
>
> Thread-5448::INFO::2012-03-29
> 11:57:47,193::task::1157::TaskManager.Task::(prepare)
> Task=`838b7ca3-9f79-4c87-a2f0-12cad48cc127`::aborting: Task is
> aborted: 'Cannot find master domain' - code 304
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,195::task::1162::TaskManager.Task::(prepare)
> Task=`838b7ca3-9f79-4c87-a2f0-12cad48cc127`::Prepare: aborted:
> Cannot find master domain
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,197::task::978::TaskManager.Task::(_decref)
> Task=`838b7ca3-9f79-4c87-a2f0-12cad48cc127`::ref 0 aborting True
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,198::task::913::TaskManager.Task::(_doAbort)
> Task=`838b7ca3-9f79-4c87-a2f0-12cad48cc127`::Task._doAbort: force
> False
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,200::resourceManager::844::ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,201::task::588::TaskManager.Task::(_updateState)
> Task=`838b7ca3-9f79-4c87-a2f0-12cad48cc127`::moving from state
> preparing -> state aborting
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,203::task::537::TaskManager.Task::(__state_aborting)
> Task=`838b7ca3-9f79-4c87-a2f0-12cad48cc127`::_aborting: recover policy
> none
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,205::task::588::TaskManager.Task::(_updateState)
> Task=`838b7ca3-9f79-4c87-a2f0-12cad48cc127`::moving from state
> aborting -> state failed
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,206::resourceManager::809::ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,208::resourceManager::844::ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-5448::ERROR::2012-03-29
> 11:57:47,209::dispatcher::89::Storage.Dispatcher.Protect::(run)
> {'status': {'message': "Cannot find master domain:
> 'spUUID=13080edc-77ea-11e1-b6a4-525400c49d2a,
> msdUUID=8ed25a57-f53a-4cf0-bb92-781f3ce36a48'", 'code': 304}}
>
> Thread-5453::DEBUG::2012-03-29
> 11:57:48,736::task::588::TaskManager.Task::(_updateState)
> Task=`b801beea-93db-44e7-ba71-5d1eca7eecae`::moving from state init
> -> state preparing
>
> Thread-5453::INFO::2012-03-29
> 11:57:48,739::logUtils::37::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-5453::INFO::2012-03-29
> 11:57:48,740::logUtils::39::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-5453::DEBUG::2012-03-29
> 11:57:48,742::task::1172::TaskManager.Task::(prepare)
> Task=`b801beea-93db-44e7-ba71-5d1eca7eecae`::finished: {}
>
> Thread-5453::DEBUG::2012-03-29
> 11:57:48,743::task::588::TaskManager.Task::(_updateState)
> Task=`b801beea-93db-44e7-ba71-5d1eca7eecae`::moving from state
> preparing -> state finished
>
> Thread-5453::DEBUG::2012-03-29
> 11:57:48,745::resourceManager::809::ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-5453::DEBUG::2012-03-29
> 11:57:48,746::resourceManager::844::ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-5453::DEBUG::2012-03-29
> 11:57:48,748::task::978::TaskManager.Task::(_decref)
> Task=`b801beea-93db-44e7-ba71-5d1eca7eecae`::ref 0 aborting False
>
> Regards, rene
>
> From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On
> behalf of Rene Rosenberger
> Sent: Thursday, March 29, 2012 11:47
> To: rvaknin at redhat.com; users at ovirt.org
> Subject: Re: [Users] storage domain reactivate not working
>
> Hi,
>
> i did:
>
> [root at KVM-DMZ-04 vdsm]# iscsiadm -m node -T
> iqn.2004-04.com.qnap:ts-419uplus:iscsi.dmznas01.c2b74d -u
>
> Logging out of session [sid: 2, target:
> iqn.2004-04.com.qnap:ts-419uplus:iscsi.dmznas01.c2b74d, portal:
> 192.168.xxx.xxx,3260]
>
> Logout of [sid: 2, target:
> iqn.2004-04.com.qnap:ts-419uplus:iscsi.dmznas01.c2b74d, portal:
> 192.168.xxx.xxx,3260] successful.
>
> [root at KVM-DMZ-04 vdsm]# iscsiadm -m node -T
> iqn.2004-04.com.qnap:ts-419uplus:iscsi.dmznas01.c2b74d -l
>
> Logging in to [iface: default, target:
> iqn.2004-04.com.qnap:ts-419uplus:iscsi.dmznas01.c2b74d, portal:
> 192.168.xxx.xxx,3260] (multiple)
>
> Login to [iface: default, target:
> iqn.2004-04.com.qnap:ts-419uplus:iscsi.dmznas01.c2b74d, portal:
> 192.168.xxx.xxx,3260] successful.
>
> But the data center is locked in the web interface: System -> default
> -> storage -> DMZ-NAS-01 -> Data Center = locked
>
> Regards, rene
>
> From: Rami Vaknin [mailto:rvaknin at redhat.com]
> Sent: Thursday, March 29, 2012 11:32
> To: Rene Rosenberger; users at ovirt.org
> Subject: Re: [Users] storage domain reactivate not working
>
> On 03/29/2012 11:28 AM, Rene Rosenberger wrote:
>
> Hi,
>
> I have rebooted my iSCSI device without putting it into maintenance
> mode. Now it is inactive, and when I try to reactivate it, it fails.
> What can I do?
>
> That depends on why it fails; the vdsm log can help.
>
> You can check whether the hosts are connected to the iSCSI target, or
> reconnect, using:
> iscsiadm -m discoverydb --discover -t st -p your_iscsi_server_fqdn
> iscsiadm -m node -T your_target_name -l
>
> Then try to activate it.
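>
> (If the domain still fails to activate after logging back in, it can
> be worth checking that the multipath device itself is readable again;
> the vdsm log in this thread shows it was disabled after repeated read
> errors. A minimal sketch, using the device name from that log:)
>
> # Show the path state of the LUN backing the domain.
> sudo multipath -ll 360014057fda80efdcae4d414eda829d7
>
> # Confirm the device can be read directly.
> sudo dd iflag=direct bs=4096 count=1 of=/dev/null \
>     if=/dev/mapper/360014057fda80efdcae4d414eda829d7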
>
> Regards, rene
>
> _______________________________________________ Users mailing list
> Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users
>
> -- Thanks, Rami Vaknin, QE @ Red Hat, TLV, IL.

