[ovirt-users] Sharing iSCSI data storage domain across multiple clusters in the same datacenter
santosh
sbahir at commvault.com
Tue Aug 5 15:36:59 UTC 2014
On 08/04/2014 04:42 PM, Itamar Heim wrote:
> On 08/04/2014 05:46 PM, santosh wrote:
>> On 08/03/2014 03:01 PM, Itamar Heim wrote:
>>> On 07/30/2014 10:35 PM, santosh wrote:
>>>> Hi,
>>>> Can we share an iSCSI data storage domain across multiple clusters in
>>>> the same datacenter?
>>>>
>>>> Here are the details of the setup I tried:
>>>>
>>>> - One datacenter, Say DC1
>>>> - in DC1, two clusters, say CL1 and CL2
>>>> - In CL1, one host, say H1. And in CL2 one host, say H2
>>>> - An iSCSI data storage domain is configured; the external storage
>>>> LUNs are exported to host H1 (the host in CL1).
>>>>
>>>>
>>>> Adding H1 to CL1 succeeded, but adding H2 to CL2 fails with the
>>>> following error in vdsm.log:
>>>>
>>>> Traceback (most recent call last):
>>>>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>>>>     return fn(*args, **kargs)
>>>>   File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
>>>>     res = f(*args, **kwargs)
>>>>   File "/usr/share/vdsm/storage/hsm.py", line 1020, in connectStoragePool
>>>>     spUUID, hostID, msdUUID, masterVersion, domainsMap)
>>>>   File "/usr/share/vdsm/storage/hsm.py", line 1091, in _connectStoragePool
>>>>     res = pool.connect(hostID, msdUUID, masterVersion)
>>>>   File "/usr/share/vdsm/storage/sp.py", line 630, in connect
>>>>     self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
>>>>   File "/usr/share/vdsm/storage/sp.py", line 1153, in __rebuild
>>>>     self.setMasterDomain(msdUUID, masterVersion)
>>>>   File "/usr/share/vdsm/storage/sp.py", line 1360, in setMasterDomain
>>>>     raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
>>>> StoragePoolMasterNotFound: Cannot find master domain:
>>>> 'spUUID=a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2,
>>>> msdUUID=741f7913-09ad-4d96-a225-3bda6d06e042'
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,780::task::885::TaskManager.Task::(_run)
>>>> Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Task._run:
>>>> 07997682-8d6b-42fd-acb3-1360f14860d6
>>>> ('a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2', 2,
>>>> '741f7913-09ad-4d96-a225-3bda6d06e042', 1, None) {} failed -
>>>> stopping task
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,780::task::1211::TaskManager.Task::(stop)
>>>> Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::stopping in state
>>>> preparing (force False)
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,780::task::990::TaskManager.Task::(_decref)
>>>> Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::ref 1 aborting True
>>>> Thread-13::INFO::2014-07-30
>>>> 15:24:49,780::task::1168::TaskManager.Task::(prepare)
>>>> Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::aborting: Task is
>>>> aborted: 'Cannot find master domain' - code 304
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,781::task::1173::TaskManager.Task::(prepare)
>>>> Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Prepare: aborted:
>>>> Cannot find master domain
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,781::task::990::TaskManager.Task::(_decref)
>>>> Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::ref 0 aborting True
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,781::task::925::TaskManager.Task::(_doAbort)
>>>> Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Task._doAbort: force False
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,781::resourceManager::977::ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,781::task::595::TaskManager.Task::(_updateState)
>>>> Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::moving from state
>>>> preparing -> state aborting
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,781::task::550::TaskManager.Task::(__state_aborting)
>>>> Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::_aborting: recover
>>>> policy none
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,782::task::595::TaskManager.Task::(_updateState)
>>>> Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::moving from state
>>>> aborting -> state failed
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,782::resourceManager::940::ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>> Thread-13::DEBUG::2014-07-30
>>>> 15:24:49,782::resourceManager::977::ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>> Thread-13::ERROR::2014-07-30
>>>> 15:24:49,782::dispatcher::65::Storage.Dispatcher.Protect::(run)
>>>> {'status': {'message': "Cannot find master domain:
>>>> 'spUUID=a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2,
>>>> msdUUID=741f7913-09ad-4d96-a225-3bda6d06e042'", 'code': 304}}
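
[Editor's note] The last log line above shows that vdsm reports this failure as a structured status dict with error code 304 (StoragePoolMasterNotFound). A minimal sketch of how a monitoring or automation script might recognise that specific code in such a response; the helper name and the sample dict are illustrative only, not part of vdsm:

```python
# Sketch only (not vdsm code): classify a response dict of the shape
# shown in the log above. Code 304 is StoragePoolMasterNotFound.

STORAGE_POOL_MASTER_NOT_FOUND = 304  # error code seen in the log

def is_master_not_found(response):
    """Return True if a vdsm-style status dict reports error code 304."""
    status = response.get("status", {})
    return status.get("code") == STORAGE_POOL_MASTER_NOT_FOUND

# Sample response, copied from the log entry above.
resp = {"status": {"message": "Cannot find master domain: "
                              "'spUUID=a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2, "
                              "msdUUID=741f7913-09ad-4d96-a225-3bda6d06e042'",
                   "code": 304}}
print(is_master_not_found(resp))  # True
```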
>>>>
>>>> Please advise whether I need one storage domain per cluster in a
>>>> given datacenter.
>>>>
>>>> Thanks, Santosh.
>>>>
>>>>
>>>> ***************************Legal Disclaimer***************************
>>>> "This communication may contain confidential and privileged material for the
>>>> sole use of the intended recipient. Any unauthorized review, use or distribution
>>>> by others is strictly prohibited. If you have received the message by mistake,
>>>> please advise the sender by reply email and delete the message. Thank you."
>>>> **********************************************************************
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>> No; you can access a storage domain from multiple clusters in the same DC.
>> [Santosh] That is my understanding also.
>>> which type of storage?
>> [Santosh] 'Storage' is of 'iSCSI' type. Domain type is 'Data (Master)'
>>> could it be it's "protecting" shared access to the DC?
>> [Santosh] How should I find it out?
> which type of storage array?
[Santosh:1] NetApp 8.1.3 7-Mode
>>> does moving the host to the same cluster as the SPM resolve the issue
>>> for you? (it shouldn't)
>> [Santosh] Moving the host to the same cluster as that of SPM did not
>> resolve the issue.
> and if you move the SPM host to maintenance, is the 2nd host able to
> become SPM and access the storage? (Failure there would indicate an
> issue with storage-side 'protection'.)
[Santosh:1] After moving the SPM host to maintenance, the 2nd host is able
to become SPM and access the storage.
[Santosh:1] I will check the storage-side protection.
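
[Editor's note] A quick way to check storage-side protection is to compare the multipath device WWIDs each host actually sees. A sketch under stated assumptions: the two sample WWID lists and file names below are made up; on a real host you would generate each list with something like `multipath -ll | awk '/^[0-9a-f]/{print $1}' | sort`:

```shell
# Illustrative data standing in for per-host `multipath -ll` output.
printf 'wwid-aaa\nwwid-bbb\n' > /tmp/wwids.h1   # what H1 reports
printf 'wwid-aaa\n' > /tmp/wwids.h2             # what H2 reports

# WWIDs visible on H1 but missing on H2 suggest LUN masking on the
# array (e.g. H2's iSCSI initiator not mapped) rather than an oVirt bug.
comm -23 /tmp/wwids.h1 /tmp/wwids.h2
```

On a 7-Mode NetApp, inspecting the filer's initiator groups and LUN maps (e.g. `igroup show` and `lun show -m`, assuming the 7-Mode CLI) should then confirm whether H2's initiator IQN is actually mapped to the LUNs backing the domain.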
> else, are you sure all hosts can access the storage?
>
>>
>> Thanks for the reply.
>> Regards, Santosh.
>>
>>
>>