[Users] Ovirt 3.3.2 Cannot attach POSIX (gluster) storage domain

Steve Dainard sdainard at miovision.com
Tue Feb 4 20:14:10 UTC 2014


I should be able to provide any logs required. I've reverted to my NFS
storage domain, but I can move a host over to POSIX whenever necessary.

Steve Dainard
IT Infrastructure Manager
Miovision <http://miovision.com/> | Rethink Traffic
519-513-2407 ex.250
877-646-8476 (toll-free)

Blog <http://miovision.com/blog> | LinkedIn
<https://www.linkedin.com/company/miovision-technologies> | Twitter
<https://twitter.com/miovision> | Facebook
<https://www.facebook.com/miovision>
------------------------------
 Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON,
Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If
you are not the intended recipient, please delete the e-mail and any
attachments and notify us immediately.


On Tue, Feb 4, 2014 at 10:20 AM, Elad Ben Aharon <ebenahar at redhat.com> wrote:

> Nir,
> Can you take a look? The user gets the same Sanlock exception as reported
> here:  https://bugzilla.redhat.com/show_bug.cgi?id=1046430
>
> ----- Original Message -----
> From: "Steve Dainard" <sdainard at miovision.com>
> To: "users" <users at ovirt.org>
> Sent: Tuesday, February 4, 2014 5:09:43 PM
> Subject: [Users] Ovirt 3.3.2 Cannot attach POSIX (gluster) storage domain
>
> I can successfully create a POSIX storage domain backed by gluster, but at
> the end of creation I get an error message "failed to acquire host id".
>
> Note that I have successfully created/activated NFS DC/SD on the same
> ovirt/hosts.
>
> Here are some logs from when I tried to attach to the DC after the failure:
>
> engine.log
>
> 2014-02-04 09:54:04,324 INFO
> [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
> (ajp--127.0.0.1-8702-3) [1dd40406] Lock Acquired to object EngineLock
> [exclusiveLocks= key: 8c4e8898-c91a-4d49-98e8-b6467791a9cc value: POOL
> , sharedLocks= ]
> 2014-02-04 09:54:04,473 INFO
> [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
> (pool-6-thread-42) [1dd40406] Running command:
> AddStoragePoolWithStoragesCommand internal: false. Entities affected : ID:
> 8c4e8898-c91a-4d49-98e8-b6467791a9cc Type: StoragePool
> 2014-02-04 09:54:04,673 INFO
> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
> (pool-6-thread-42) [3f86c31b] Running command: ConnectStorageToVdsCommand
> internal: true. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa
> Type: System
> 2014-02-04 09:54:04,682 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (pool-6-thread-42) [3f86c31b] START, ConnectStorageServerVDSCommand(
> HostName = ovirt001, HostId = 48f13d47-8346-4ff6-81ca-4f4324069db3,
> storagePoolId = 00000000-0000-0000-0000-000000000000, storageType =
> POSIXFS, connectionList = [{ id: 87f9ff74-93c4-4fe5-9a56-ed5338290af9,
> connection: 10.0.10.3:/rep2, iqn: null, vfsType: glusterfs, mountOptions:
> null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id:
> 332ff091
> 2014-02-04 09:54:05,089 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (pool-6-thread-42) [3f86c31b] FINISH, ConnectStorageServerVDSCommand
> , return: {87f9ff74-93c4-4fe5-9a56-ed5338290af9=0}, log id: 332ff091
> 2014-02-04 09:54:05,093 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (pool-6-thread-42) [3f86c31b] START, CreateStoragePoolVDSCommand(HostName
> = ovirt001, HostId = 48f13d47-8346-4ff6-81ca-4f4324069db3,
> storagePoolId=8c4e8898-c91a-4d49-98e8-b6467791a9cc, storageType=POSIXFS,
> storagePoolName=IT,
> masterDomainId=471487ed-2946-4dfc-8ec3-96546006be12,
> domainsIdList=[471487ed-2946-4dfc-8ec3-96546006be12], masterVersion=3), log
> id: 1be84579
> 2014-02-04 09:54:08,833 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (pool-6-thread-42) [3f86c31b] Failed in CreateStoragePoolVDS method
> 2014-02-04 09:54:08,834 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (pool-6-thread-42) [3f86c31b] Error code AcquireHostIdFailure and error
> message VDSGenericException: VDSErrorException: Failed to
> CreateStoragePoolVDS, error = Cannot acquire host id:
> ('471487ed-2946-4dfc-8ec3-96546006be12', SanlockException(22, 'Sanlock
> lockspace add failure', 'Invalid argument'))
> 2014-02-04 09:54:08,835 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (pool-6-thread-42) [3f86c31b] Command
> org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand
> return value
> StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=661,
> mMessage=Cannot acquire host id: ('471487ed-2946-4dfc-8ec3-96546006be12',
> SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))]]
> 2014-02-04 09:54:08,836 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (pool-6-thread-42) [3f86c31b] HostName = ovirt001
> 2014-02-04 09:54:08,840 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (pool-6-thread-42) [3f86c31b] Command CreateStoragePoolVDS execution
> failed. Exception: VDSErrorException: VDSGenericException:
> VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire
> host id: ('471487ed-2946-4dfc-8ec3-96546006be12', SanlockException(22,
> 'Sanlock lockspace add failure', 'Invalid argument'))
> 2014-02-04 09:54:08,840 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (pool-6-thread-42) [3f86c31b] FINISH, CreateStoragePoolVDSCommand, log id:
> 1be84579
> 2014-02-04 09:54:08,841 ERROR
> [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
> (pool-6-thread-42) [3f86c31b] Command
> org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand throw
> Vdc Bll exception. With error message VdcBLLException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS,
> error = Cannot acquire host id: ('471487ed-2946-4dfc-8ec3-96546006be12',
> SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
> (Failed with error AcquireHostIdFailure and code 661)
> 2014-02-04 09:54:08,867 INFO
> [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
> (pool-6-thread-42) [3f86c31b] Command
> [id=373987cb-b54d-4174-b4a9-195be631f0d7]: Compensating CHANGED_ENTITY of
> org.ovirt.engine.core.common.businessentities.StoragePool; snapshot:
> id=8c4e8898-c91a-4d49-98e8-b6467791a9cc.
> 2014-02-04 09:54:08,871 INFO
> [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
> (pool-6-thread-42) [3f86c31b] Command
> [id=373987cb-b54d-4174-b4a9-195be631f0d7]: Compensating NEW_ENTITY_ID of
> org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot:
> storagePoolId = 8c4e8898-c91a-4d49-98e8-b6467791a9cc, storageId =
> 471487ed-2946-4dfc-8ec3-96546006be12.
> 2014-02-04 09:54:08,879 INFO
> [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
> (pool-6-thread-42) [3f86c31b] Command
> [id=373987cb-b54d-4174-b4a9-195be631f0d7]: Compensating CHANGED_ENTITY of
> org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
> snapshot: id=471487ed-2946-4dfc-8ec3-96546006be12.
> 2014-02-04 09:54:08,951 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (pool-6-thread-42) [3f86c31b] Correlation ID: 1dd40406, Job ID:
> 07003dff-9e0e-42ae-8f88-6b055b45f797, Call Stack: null, Custom Event ID:
> -1, Message: Failed to attach Storage Domains to Data Center IT. (User:
> admin at internal)
> 2014-02-04 09:54:08,975 INFO
> [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
> (pool-6-thread-42) [3f86c31b] Lock freed to object EngineLock
> [exclusiveLocks= key: 8c4e8898-c91a-4d49-98e8-b6467791a9cc value: POOL
> , sharedLocks= ]
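
Every failing engine.log entry above ends with SanlockException(22, 'Sanlock
lockspace add failure', 'Invalid argument'). The 22 is a raw POSIX errno; a
tiny standard-library sketch for decoding these numbers when reading logs like
this (my own helper, not part of oVirt or vdsm):

```python
import errno
import os

def describe_errno(code):
    """Map a numeric errno from a log line to its symbolic name and message."""
    name = errno.errorcode.get(code, 'UNKNOWN')
    return '%d (%s): %s' % (code, name, os.strerror(code))

# The code sanlock reported in the failure above:
print(describe_errno(22))
```

On Linux this prints `22 (EINVAL): Invalid argument`, which matches the
'Invalid argument' text sanlock attached to the exception.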
>
> vdsm.log
>
> Thread-30::DEBUG::2014-02-04
> 09:54:04,692::BindingXMLRPC::167::vds::(wrapper) client [10.0.10.2] flowID
> [3f86c31b]
> Thread-30::DEBUG::2014-02-04
> 09:54:04,692::task::579::TaskManager.Task::(_updateState)
> Task=`218dcde9-bbc7-4d5a-ad53-0bab556c6261`::moving from state init ->
> state preparing
> Thread-30::INFO::2014-02-04
> 09:54:04,693::logUtils::44::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=6,
> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
> 'connection': '10.0.10.3:/rep2', 'iqn': '', 'portal': '', 'user': '',
> 'vfs_type': 'glusterfs', 'password': '******', 'id':
> '87f9ff74-93c4-4fe5-9a56-ed5338290af9'}], options=None)
> Thread-30::DEBUG::2014-02-04
> 09:54:04,698::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n
> /bin/mount -t glusterfs 10.0.10.3:/rep2 /rhev/data-center/mnt/10.0.10.3:_rep2'
> (cwd None)
> Thread-30::DEBUG::2014-02-04
> 09:54:05,067::hsm::2315::Storage.HSM::(__prefetchDomains) posix local path:
> /rhev/data-center/mnt/10.0.10.3:_rep2
> Thread-30::DEBUG::2014-02-04
> 09:54:05,078::hsm::2333::Storage.HSM::(__prefetchDomains) Found SD uuids:
> ('471487ed-2946-4dfc-8ec3-96546006be12',)
> Thread-30::DEBUG::2014-02-04
> 09:54:05,078::hsm::2389::Storage.HSM::(connectStorageServer) knownSDs:
> {471487ed-2946-4dfc-8ec3-96546006be12: storage.nfsSD.findDomain}
> Thread-30::INFO::2014-02-04
> 09:54:05,078::logUtils::47::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id':
> '87f9ff74-93c4-4fe5-9a56-ed5338290af9'}]}
> Thread-30::DEBUG::2014-02-04
> 09:54:05,079::task::1168::TaskManager.Task::(prepare)
> Task=`218dcde9-bbc7-4d5a-ad53-0bab556c6261`::finished: {'statuslist':
> [{'status': 0, 'id': '87f9ff74-93c4-4fe5-9a56-ed5338290af9'}]}
> Thread-30::DEBUG::2014-02-04
> 09:54:05,079::task::579::TaskManager.Task::(_updateState)
> Task=`218dcde9-bbc7-4d5a-ad53-0bab556c6261`::moving from state preparing ->
> state finished
> Thread-30::DEBUG::2014-02-04
> 09:54:05,079::resourceManager::939::ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> Thread-30::DEBUG::2014-02-04
> 09:54:05,079::resourceManager::976::ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-30::DEBUG::2014-02-04
> 09:54:05,079::task::974::TaskManager.Task::(_decref)
> Task=`218dcde9-bbc7-4d5a-ad53-0bab556c6261`::ref 0 aborting False
> Thread-31::DEBUG::2014-02-04
> 09:54:05,098::BindingXMLRPC::167::vds::(wrapper) client [10.0.10.2] flowID
> [3f86c31b]
> Thread-31::DEBUG::2014-02-04
> 09:54:05,099::task::579::TaskManager.Task::(_updateState)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::moving from state init ->
> state preparing
> Thread-31::INFO::2014-02-04
> 09:54:05,099::logUtils::44::dispatcher::(wrapper) Run and protect:
> createStoragePool(poolType=None,
> spUUID='8c4e8898-c91a-4d49-98e8-b6467791a9cc', poolName='IT',
> masterDom='471487ed-2946-4dfc-8ec3-96546006be12',
> domList=['471487ed-2946-4dfc-8ec3-96546006be12'], masterVersion=3,
> lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60,
> ioOpTimeoutSec=10, leaseRetries=3, options=None)
> Thread-31::DEBUG::2014-02-04
> 09:54:05,099::misc::809::SamplingMethod::(__call__) Trying to enter
> sampling method (storage.sdc.refreshStorage)
> Thread-31::DEBUG::2014-02-04
> 09:54:05,100::misc::811::SamplingMethod::(__call__) Got in to sampling
> method
> Thread-31::DEBUG::2014-02-04
> 09:54:05,100::misc::809::SamplingMethod::(__call__) Trying to enter
> sampling method (storage.iscsi.rescan)
> Thread-31::DEBUG::2014-02-04
> 09:54:05,100::misc::811::SamplingMethod::(__call__) Got in to sampling
> method
> Thread-31::DEBUG::2014-02-04
> 09:54:05,100::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo
> -n /sbin/iscsiadm -m session -R' (cwd None)
> Thread-31::DEBUG::2014-02-04
> 09:54:05,114::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) FAILED: <err> =
> 'iscsiadm: No session found.\n'; <rc> = 21
> Thread-31::DEBUG::2014-02-04
> 09:54:05,115::misc::819::SamplingMethod::(__call__) Returning last result
> Thread-31::DEBUG::2014-02-04
> 09:54:07,144::multipath::112::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo
> -n /sbin/multipath -r' (cwd None)
> Thread-31::DEBUG::2014-02-04
> 09:54:07,331::multipath::112::Storage.Misc.excCmd::(rescan) SUCCESS: <err>
> = ''; <rc> = 0
> Thread-31::DEBUG::2014-02-04
> 09:54:07,332::lvm::510::OperationMutex::(_invalidateAllPvs) Operation 'lvm
> invalidate operation' got the operation mutex
> Thread-31::DEBUG::2014-02-04
> 09:54:07,333::lvm::512::OperationMutex::(_invalidateAllPvs) Operation 'lvm
> invalidate operation' released the operation mutex
> Thread-31::DEBUG::2014-02-04
> 09:54:07,333::lvm::521::OperationMutex::(_invalidateAllVgs) Operation 'lvm
> invalidate operation' got the operation mutex
> Thread-31::DEBUG::2014-02-04
> 09:54:07,333::lvm::523::OperationMutex::(_invalidateAllVgs) Operation 'lvm
> invalidate operation' released the operation mutex
> Thread-31::DEBUG::2014-02-04
> 09:54:07,333::lvm::541::OperationMutex::(_invalidateAllLvs) Operation 'lvm
> invalidate operation' got the operation mutex
> Thread-31::DEBUG::2014-02-04
> 09:54:07,334::lvm::543::OperationMutex::(_invalidateAllLvs) Operation 'lvm
> invalidate operation' released the operation mutex
> Thread-31::DEBUG::2014-02-04
> 09:54:07,334::misc::819::SamplingMethod::(__call__) Returning last result
> Thread-31::DEBUG::2014-02-04
> 09:54:07,499::fileSD::137::Storage.StorageDomain::(__init__) Reading domain
> in path /rhev/data-center/mnt/10.0.10.3:_rep2/471487ed-2946-4dfc-8ec3-96546006be12
> Thread-31::DEBUG::2014-02-04
> 09:54:07,605::persistentDict::192::Storage.PersistentDict::(__init__)
> Created a persistent dict with FileMetadataRW backend
> Thread-31::DEBUG::2014-02-04
> 09:54:07,647::persistentDict::234::Storage.PersistentDict::(refresh) read
> lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=gluster-store-rep2',
> 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
> 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.0.10.3:/rep2',
> 'ROLE=Regular', 'SDUUID=471487ed-2946-4dfc-8ec3-96546006be12',
> 'TYPE=POSIXFS', 'VERSION=3',
> '_SHA_CKSUM=469191aac3fb8ef504b6a4d301b6d8be6fffece1']
> Thread-31::DEBUG::2014-02-04
> 09:54:07,683::fileSD::558::Storage.StorageDomain::(imageGarbageCollector)
> Removing remnants of deleted images []
> Thread-31::DEBUG::2014-02-04
> 09:54:07,684::resourceManager::420::ResourceManager::(registerNamespace)
> Registering namespace '471487ed-2946-4dfc-8ec3-96546006be12_imageNS'
> Thread-31::DEBUG::2014-02-04
> 09:54:07,684::resourceManager::420::ResourceManager::(registerNamespace)
> Registering namespace '471487ed-2946-4dfc-8ec3-96546006be12_volumeNS'
> Thread-31::INFO::2014-02-04
> 09:54:07,684::fileSD::299::Storage.StorageDomain::(validate)
> sdUUID=471487ed-2946-4dfc-8ec3-96546006be12
> Thread-31::DEBUG::2014-02-04
> 09:54:07,692::persistentDict::234::Storage.PersistentDict::(refresh) read
> lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=gluster-store-rep2',
> 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
> 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.0.10.3:/rep2',
> 'ROLE=Regular', 'SDUUID=471487ed-2946-4dfc-8ec3-96546006be12',
> 'TYPE=POSIXFS', 'VERSION=3',
> '_SHA_CKSUM=469191aac3fb8ef504b6a4d301b6d8be6fffece1']
> Thread-31::DEBUG::2014-02-04
> 09:54:07,693::resourceManager::197::ResourceManager.Request::(__init__)
> ResName=`Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc`ReqID=`e0a3d477-b953-49d9-ab78-67695a6bc6d5`::Request
> was made in '/usr/share/vdsm/storage/hsm.py' line '971' at
> 'createStoragePool'
> Thread-31::DEBUG::2014-02-04
> 09:54:07,693::resourceManager::541::ResourceManager::(registerResource)
> Trying to register resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc'
> for lock type 'exclusive'
> Thread-31::DEBUG::2014-02-04
> 09:54:07,693::resourceManager::600::ResourceManager::(registerResource)
> Resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc' is free. Now
> locking as 'exclusive' (1 active user)
> Thread-31::DEBUG::2014-02-04
> 09:54:07,693::resourceManager::237::ResourceManager.Request::(grant)
> ResName=`Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc`ReqID=`e0a3d477-b953-49d9-ab78-67695a6bc6d5`::Granted
> request
> Thread-31::DEBUG::2014-02-04
> 09:54:07,694::task::811::TaskManager.Task::(resourceAcquired)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::_resourcesAcquired:
> Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc (exclusive)
> Thread-31::DEBUG::2014-02-04
> 09:54:07,694::task::974::TaskManager.Task::(_decref)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::ref 1 aborting False
> Thread-31::DEBUG::2014-02-04
> 09:54:07,694::resourceManager::197::ResourceManager.Request::(__init__)
> ResName=`Storage.471487ed-2946-4dfc-8ec3-96546006be12`ReqID=`bc20dd7e-d351-47c5-8ed3-78b1b11d703a`::Request
> was made in '/usr/share/vdsm/storage/hsm.py' line '973' at
> 'createStoragePool'
> Thread-31::DEBUG::2014-02-04
> 09:54:07,694::resourceManager::541::ResourceManager::(registerResource)
> Trying to register resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12'
> for lock type 'exclusive'
> Thread-31::DEBUG::2014-02-04
> 09:54:07,695::resourceManager::600::ResourceManager::(registerResource)
> Resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12' is free. Now
> locking as 'exclusive' (1 active user)
> Thread-31::DEBUG::2014-02-04
> 09:54:07,695::resourceManager::237::ResourceManager.Request::(grant)
> ResName=`Storage.471487ed-2946-4dfc-8ec3-96546006be12`ReqID=`bc20dd7e-d351-47c5-8ed3-78b1b11d703a`::Granted
> request
> Thread-31::DEBUG::2014-02-04
> 09:54:07,695::task::811::TaskManager.Task::(resourceAcquired)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::_resourcesAcquired:
> Storage.471487ed-2946-4dfc-8ec3-96546006be12 (exclusive)
> Thread-31::DEBUG::2014-02-04
> 09:54:07,695::task::974::TaskManager.Task::(_decref)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::ref 1 aborting False
> Thread-31::INFO::2014-02-04
> 09:54:07,696::sp::593::Storage.StoragePool::(create)
> spUUID=8c4e8898-c91a-4d49-98e8-b6467791a9cc poolName=IT
> master_sd=471487ed-2946-4dfc-8ec3-96546006be12
> domList=['471487ed-2946-4dfc-8ec3-96546006be12'] masterVersion=3
> {'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3,
> 'LOCKRENEWALINTERVALSEC': 5}
> Thread-31::INFO::2014-02-04
> 09:54:07,696::fileSD::299::Storage.StorageDomain::(validate)
> sdUUID=471487ed-2946-4dfc-8ec3-96546006be12
> Thread-31::DEBUG::2014-02-04
> 09:54:07,703::persistentDict::234::Storage.PersistentDict::(refresh) read
> lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=gluster-store-rep2',
> 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
> 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.0.10.3:/rep2',
> 'ROLE=Regular', 'SDUUID=471487ed-2946-4dfc-8ec3-96546006be12',
> 'TYPE=POSIXFS', 'VERSION=3',
> '_SHA_CKSUM=469191aac3fb8ef504b6a4d301b6d8be6fffece1']
> Thread-31::DEBUG::2014-02-04
> 09:54:07,710::persistentDict::234::Storage.PersistentDict::(refresh) read
> lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=gluster-store-rep2',
> 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
> 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.0.10.3:/rep2',
> 'ROLE=Regular', 'SDUUID=471487ed-2946-4dfc-8ec3-96546006be12',
> 'TYPE=POSIXFS', 'VERSION=3',
> '_SHA_CKSUM=469191aac3fb8ef504b6a4d301b6d8be6fffece1']
> Thread-31::DEBUG::2014-02-04
> 09:54:07,711::persistentDict::167::Storage.PersistentDict::(transaction)
> Starting transaction
> Thread-31::DEBUG::2014-02-04
> 09:54:07,711::persistentDict::175::Storage.PersistentDict::(transaction)
> Finished transaction
> Thread-31::INFO::2014-02-04
> 09:54:07,711::clusterlock::174::SANLock::(acquireHostId) Acquiring host id
> for domain 471487ed-2946-4dfc-8ec3-96546006be12 (id: 250)
> Thread-31::ERROR::2014-02-04
> 09:54:08,722::task::850::TaskManager.Task::(_setError)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::Unexpected error
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/task.py", line 857, in _run
> return fn(*args, **kargs)
> File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
> res = f(*args, **kwargs)
> File "/usr/share/vdsm/storage/hsm.py", line 977, in createStoragePool
> masterVersion, leaseParams)
> File "/usr/share/vdsm/storage/sp.py", line 618, in create
> self._acquireTemporaryClusterLock(msdUUID, leaseParams)
> File "/usr/share/vdsm/storage/sp.py", line 560, in
> _acquireTemporaryClusterLock
> msd.acquireHostId( self.id )
> File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId
> self._clusterLock.acquireHostId(hostId, async)
> File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId
> raise se.AcquireHostIdFailure(self._sdUUID, e)
> AcquireHostIdFailure: Cannot acquire host id:
> ('471487ed-2946-4dfc-8ec3-96546006be12', SanlockException(22, 'Sanlock
> lockspace add failure', 'Invalid argument'))
> Thread-31::DEBUG::2014-02-04
> 09:54:08,826::task::869::TaskManager.Task::(_run)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::Task._run:
> 66924dbf-5a1c-473e-a158-d038aae38dc3 (None,
> '8c4e8898-c91a-4d49-98e8-b6467791a9cc', 'IT',
> '471487ed-2946-4dfc-8ec3-96546006be12',
> ['471487ed-2946-4dfc-8ec3-96546006be12'], 3, None, 5, 60, 10, 3) {} failed
> - stopping task
> Thread-31::DEBUG::2014-02-04
> 09:54:08,826::task::1194::TaskManager.Task::(stop)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::stopping in state preparing
> (force False)
> Thread-31::DEBUG::2014-02-04
> 09:54:08,826::task::974::TaskManager.Task::(_decref)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::ref 1 aborting True
> Thread-31::INFO::2014-02-04
> 09:54:08,826::task::1151::TaskManager.Task::(prepare)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::aborting: Task is aborted:
> 'Cannot acquire host id' - code 661
> Thread-31::DEBUG::2014-02-04
> 09:54:08,826::task::1156::TaskManager.Task::(prepare)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::Prepare: aborted: Cannot
> acquire host id
> Thread-31::DEBUG::2014-02-04
> 09:54:08,827::task::974::TaskManager.Task::(_decref)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::ref 0 aborting True
> Thread-31::DEBUG::2014-02-04
> 09:54:08,827::task::909::TaskManager.Task::(_doAbort)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::Task._doAbort: force False
> Thread-31::DEBUG::2014-02-04
> 09:54:08,827::resourceManager::976::ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-31::DEBUG::2014-02-04
> 09:54:08,827::task::579::TaskManager.Task::(_updateState)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::moving from state preparing ->
> state aborting
> Thread-31::DEBUG::2014-02-04
> 09:54:08,827::task::534::TaskManager.Task::(__state_aborting)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::_aborting: recover policy none
> Thread-31::DEBUG::2014-02-04
> 09:54:08,827::task::579::TaskManager.Task::(_updateState)
> Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::moving from state aborting ->
> state failed
> Thread-31::DEBUG::2014-02-04
> 09:54:08,827::resourceManager::939::ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources
> {'Storage.471487ed-2946-4dfc-8ec3-96546006be12': < ResourceRef
> 'Storage.471487ed-2946-4dfc-8ec3-96546006be12', isValid: 'True' obj:
> 'None'>, 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc': < ResourceRef
> 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc', isValid: 'True' obj:
> 'None'>}
> Thread-31::DEBUG::2014-02-04
> 09:54:08,828::resourceManager::976::ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-31::DEBUG::2014-02-04
> 09:54:08,828::resourceManager::615::ResourceManager::(releaseResource)
> Trying to release resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12'
> Thread-31::DEBUG::2014-02-04
> 09:54:08,828::resourceManager::634::ResourceManager::(releaseResource)
> Released resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12' (0 active
> users)
> Thread-31::DEBUG::2014-02-04
> 09:54:08,828::resourceManager::640::ResourceManager::(releaseResource)
> Resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12' is free, finding
> out if anyone is waiting for it.
> Thread-31::DEBUG::2014-02-04
> 09:54:08,828::resourceManager::648::ResourceManager::(releaseResource) No
> one is waiting for resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12',
> Clearing records.
> Thread-31::DEBUG::2014-02-04
> 09:54:08,828::resourceManager::615::ResourceManager::(releaseResource)
> Trying to release resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc'
> Thread-31::DEBUG::2014-02-04
> 09:54:08,829::resourceManager::634::ResourceManager::(releaseResource)
> Released resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc' (0 active
> users)
> Thread-31::DEBUG::2014-02-04
> 09:54:08,829::resourceManager::640::ResourceManager::(releaseResource)
> Resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc' is free, finding
> out if anyone is waiting for it.
> Thread-31::DEBUG::2014-02-04
> 09:54:08,829::resourceManager::648::ResourceManager::(releaseResource) No
> one is waiting for resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc',
> Clearing records.
> Thread-31::ERROR::2014-02-04
> 09:54:08,829::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status':
> {'message': "Cannot acquire host id:
> ('471487ed-2946-4dfc-8ec3-96546006be12', SanlockException(22, 'Sanlock
> lockspace add failure', 'Invalid argument'))", 'code': 661}}
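
The traceback in the vdsm.log above shows clusterlock.py catching the raw
sanlock error and re-raising it as AcquireHostIdFailure. A minimal sketch of
that wrap-and-reraise pattern (simplified names and signatures of my own, not
vdsm's actual code):

```python
import errno

class SanlockException(Exception):
    """Stand-in for the sanlock module's exception: (errno, message, detail)."""

class AcquireHostIdFailure(Exception):
    """Raised when a host cannot join a storage domain's sanlock lockspace."""

def acquire_host_id(sd_uuid, host_id, add_lockspace):
    """Join the sanlock lockspace for a domain, wrapping low-level failures."""
    try:
        add_lockspace(sd_uuid, host_id)
    except SanlockException as e:
        # errno 22 (EINVAL) at this point typically means sanlock rejected
        # the lockspace arguments, e.g. a bad or unreadable dom_md/ids file.
        raise AcquireHostIdFailure(sd_uuid, e)

def failing_add(sd_uuid, host_id):
    # Simulate the failure seen in the logs above.
    raise SanlockException(errno.EINVAL, 'Sanlock lockspace add failure',
                           'Invalid argument')
```

Calling acquire_host_id('471487ed-2946-4dfc-8ec3-96546006be12', 250,
failing_add) raises AcquireHostIdFailure carrying the domain UUID and the
original sanlock error, which is the shape of the 'Cannot acquire host id'
message that surfaces in both logs.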
>
>
> Storage domain metadata file:
>
> CLASS=Data
> DESCRIPTION=gluster-store-rep2
> IOOPTIMEOUTSEC=10
> LEASERETRIES=3
> LEASETIMESEC=60
> LOCKPOLICY=
> LOCKRENEWALINTERVALSEC=5
> POOL_UUID=
> REMOTE_PATH=10.0.10.3:/rep2
> ROLE=Regular
> SDUUID=471487ed-2946-4dfc-8ec3-96546006be12
> TYPE=POSIXFS
> VERSION=3
> _SHA_CKSUM=469191aac3fb8ef504b6a4d301b6d8be6fffece1
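
The metadata file is plain KEY=VALUE text, one pair per line. A small parser
sketch (a hypothetical helper of mine, not vdsm code) makes fields like SDUUID
and POOL_UUID easy to check; note that POOL_UUID is empty here, i.e. the
domain is not currently attached to any pool:

```python
def parse_sd_metadata(text):
    """Parse oVirt storage-domain metadata (KEY=VALUE, one pair per line)."""
    meta = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or '=' not in line:
            continue
        key, _, value = line.partition('=')
        meta[key] = value
    return meta

# A subset of the metadata file shown above:
SAMPLE = """\
CLASS=Data
DESCRIPTION=gluster-store-rep2
POOL_UUID=
REMOTE_PATH=10.0.10.3:/rep2
ROLE=Regular
SDUUID=471487ed-2946-4dfc-8ec3-96546006be12
TYPE=POSIXFS
VERSION=3
"""

meta = parse_sd_metadata(SAMPLE)
```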
>
>
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>