[Users] Ovirt 3.3 Fedora 19 add gluster storage permissions error
Steve Dainard
sdainard at miovision.com
Fri Sep 20 17:30:59 UTC 2013
Awesome, thanks guys. It's odd that the article tells you to set the options as
'key=value' rather than 'key value'; that must be some legacy syntax.
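For the archive, the commands I ran were along these lines (a sketch from
memory, so double-check against the wiki page; the volume name is mine, and
uid/gid 36 is the vdsm:kvm user vdsm runs as):

  # run once from any gluster peer
  gluster volume set rep2-virt storage.owner-uid 36
  gluster volume set rep2-virt storage.owner-gid 36
  # confirm what actually got applied
  gluster volume info rep2-virt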
With those changes in place I hit a different error. Deepak, maybe you've
seen this one when adding a new storage domain:
[root@ovirt-manager2 ~]# tail -f /var/log/ovirt-engine/engine.log
2013-09-20 13:16:36,226 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(ajp--127.0.0.1-8702-9) Command CreateStoragePoolVDS execution failed.
Exception: VDSErrorException: VDSGenericException: VDSErrorException:
Failed to CreateStoragePoolVDS, error = Cannot acquire host id:
('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock
lockspace add failure', 'Invalid argument'))
2013-09-20 13:16:36,229 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(ajp--127.0.0.1-8702-9) FINISH, CreateStoragePoolVDSCommand, log id:
672635cc
2013-09-20 13:16:36,231 ERROR
[org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
(ajp--127.0.0.1-8702-9) Command
org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand throw
Vdc Bll exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS,
error = Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11',
SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
(Failed with VDSM error AcquireHostIdFailure and code 661)
2013-09-20 13:16:36,296 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-9) Correlation ID: 11070337, Call Stack: null, Custom
Event ID: -1, Message: Failed to attach Storage Domains to Data Center
Default. (User: admin@internal)
2013-09-20 13:16:36,299 INFO
[org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
(ajp--127.0.0.1-8702-9) Lock freed to object EngineLock [exclusiveLocks=
key: 5849b030-626e-47cb-ad90-3ce782d831b3 value: POOL
, sharedLocks= ]
2013-09-20 13:16:36,387 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]:
Compensating CHANGED_ENTITY of
org.ovirt.engine.core.common.businessentities.StoragePool; snapshot:
id=5849b030-626e-47cb-ad90-3ce782d831b3.
2013-09-20 13:16:36,398 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]:
Compensating NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot:
storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, storageId =
17d21ac7-5859-4f25-8de7-2a9433d50c11.
2013-09-20 13:16:36,425 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]:
Compensating CHANGED_ENTITY of
org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
snapshot: id=17d21ac7-5859-4f25-8de7-2a9433d50c11.
2013-09-20 13:16:36,464 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-9) Correlation ID: 302ae6eb, Job ID:
014ec59b-e6d7-4e5e-b588-4fb0dfa8f1c8, Call Stack: null, Custom Event ID:
-1, Message: Failed to attach Storage Domain rep2-virt to Data Center
Default. (User: admin@internal)
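The matching vdsm.log from the host is below. While digging I also want to
sanity-check the ownership sanlock sees on the new domain's metadata files
over the gluster mount (they should end up as 36:36, vdsm:kvm). Roughly this,
with the path taken from the vdsm.log and dom_md being where vdsm keeps the
ids/leases files, if I remember right:

  # on ovirt001
  ls -ln /rhev/data-center/mnt/glusterSD/192.168.1.1:rep2-virt/17d21ac7-5859-4f25-8de7-2a9433d50c11/dom_md/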
[root@ovirt001 ~]# tail -f /var/log/vdsm/vdsm.log
Thread-32374::DEBUG::2013-09-20
13:16:18,107::task::579::TaskManager.Task::(_updateState)
Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::moving from state init ->
state preparing
Thread-32374::INFO::2013-09-20
13:16:18,107::logUtils::44::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-32374::INFO::2013-09-20
13:16:18,107::logUtils::47::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {}
Thread-32374::DEBUG::2013-09-20
13:16:18,108::task::1168::TaskManager.Task::(prepare)
Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::finished: {}
Thread-32374::DEBUG::2013-09-20
13:16:18,108::task::579::TaskManager.Task::(_updateState)
Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::moving from state preparing ->
state finished
Thread-32374::DEBUG::2013-09-20
13:16:18,108::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-32374::DEBUG::2013-09-20
13:16:18,108::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-32374::DEBUG::2013-09-20
13:16:18,108::task::974::TaskManager.Task::(_decref)
Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::ref 0 aborting False
Thread-32379::DEBUG::2013-09-20
13:16:29,509::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32379::DEBUG::2013-09-20
13:16:29,510::task::579::TaskManager.Task::(_updateState)
Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::moving from state init ->
state preparing
Thread-32379::INFO::2013-09-20
13:16:29,510::logUtils::44::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '',
'vfs_type': 'glusterfs', 'password': '******', 'id':
'00000000-0000-0000-0000-000000000000'}], options=None)
Thread-32379::DEBUG::2013-09-20
13:16:29,516::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath:
glusterSD/*
Thread-32379::DEBUG::2013-09-20
13:16:29,523::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-32379::DEBUG::2013-09-20
13:16:29,523::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs:
{b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain,
983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain,
b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain}
Thread-32379::INFO::2013-09-20
13:16:29,523::logUtils::47::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id':
'00000000-0000-0000-0000-000000000000'}]}
Thread-32379::DEBUG::2013-09-20
13:16:29,523::task::1168::TaskManager.Task::(prepare)
Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::finished: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-32379::DEBUG::2013-09-20
13:16:29,524::task::579::TaskManager.Task::(_updateState)
Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::moving from state preparing ->
state finished
Thread-32379::DEBUG::2013-09-20
13:16:29,524::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-32379::DEBUG::2013-09-20
13:16:29,524::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-32379::DEBUG::2013-09-20
13:16:29,524::task::974::TaskManager.Task::(_decref)
Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::ref 0 aborting False
Thread-32382::DEBUG::2013-09-20
13:16:29,888::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32382::DEBUG::2013-09-20
13:16:29,888::task::579::TaskManager.Task::(_updateState)
Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::moving from state init ->
state preparing
Thread-32382::INFO::2013-09-20
13:16:29,889::logUtils::44::dispatcher::(wrapper) Run and protect:
createStorageDomain(storageType=7,
sdUUID='17d21ac7-5859-4f25-8de7-2a9433d50c11', domainName='rep2-virt',
typeSpecificArg='192.168.1.1:rep2-virt', domClass=1, domVersion='3',
options=None)
Thread-32382::DEBUG::2013-09-20
13:16:29,889::misc::807::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)
Thread-32382::DEBUG::2013-09-20
13:16:29,889::misc::809::SamplingMethod::(__call__) Got in to sampling
method
Thread-32382::DEBUG::2013-09-20
13:16:29,889::misc::807::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)
Thread-32382::DEBUG::2013-09-20
13:16:29,889::misc::809::SamplingMethod::(__call__) Got in to sampling
method
Thread-32382::DEBUG::2013-09-20
13:16:29,889::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo
-n /sbin/iscsiadm -m session -R' (cwd None)
Thread-32382::DEBUG::2013-09-20
13:16:29,904::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) FAILED: <err> =
'iscsiadm: No session found.\n'; <rc> = 21
Thread-32382::DEBUG::2013-09-20
13:16:29,904::misc::817::SamplingMethod::(__call__) Returning last result
Thread-32382::DEBUG::2013-09-20
13:16:32,931::multipath::111::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo
-n /sbin/multipath' (cwd None)
Thread-32382::DEBUG::2013-09-20
13:16:33,255::multipath::111::Storage.Misc.excCmd::(rescan) SUCCESS: <err>
= ''; <rc> = 0
Thread-32382::DEBUG::2013-09-20
13:16:33,256::lvm::483::OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20
13:16:33,256::lvm::485::OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-32382::DEBUG::2013-09-20
13:16:33,256::lvm::494::OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20
13:16:33,256::lvm::496::OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-32382::DEBUG::2013-09-20
13:16:33,257::lvm::514::OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20
13:16:33,257::lvm::516::OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-32382::DEBUG::2013-09-20
13:16:33,257::misc::817::SamplingMethod::(__call__) Returning last result
Thread-32382::ERROR::2013-09-20
13:16:33,257::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32382::ERROR::2013-09-20
13:16:33,257::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32382::DEBUG::2013-09-20
13:16:33,258::lvm::374::OperationMutex::(_reloadvgs) Operation 'lvm reload
operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20
13:16:33,259::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
/sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
filter = [ \'r|.*|\' ] } global { locking_type=1
prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50
retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
17d21ac7-5859-4f25-8de7-2a9433d50c11' (cwd None)
Thread-32382::DEBUG::2013-09-20
13:16:33,285::lvm::311::Storage.Misc.excCmd::(cmd) FAILED: <err> = '
Volume group "17d21ac7-5859-4f25-8de7-2a9433d50c11" not found\n'; <rc> = 5
Thread-32382::WARNING::2013-09-20
13:16:33,286::lvm::379::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
Volume group "17d21ac7-5859-4f25-8de7-2a9433d50c11" not found']
Thread-32382::DEBUG::2013-09-20
13:16:33,286::lvm::403::OperationMutex::(_reloadvgs) Operation 'lvm reload
operation' released the operation mutex
Thread-32382::ERROR::2013-09-20
13:16:33,295::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
17d21ac7-5859-4f25-8de7-2a9433d50c11 not found
Traceback (most recent call last):
File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
('17d21ac7-5859-4f25-8de7-2a9433d50c11',)
Thread-32382::INFO::2013-09-20
13:16:33,295::nfsSD::69::Storage.StorageDomain::(create)
sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11 domainName=rep2-virt
remotePath=192.168.1.1:rep2-virt domClass=1
Thread-32382::DEBUG::2013-09-20
13:16:33,430::persistentDict::192::Storage.PersistentDict::(__init__)
Created a persistent dict with FileMetadataRW backend
Thread-32382::DEBUG::2013-09-20
13:16:33,445::persistentDict::234::Storage.PersistentDict::(refresh) read
lines (FileMetadataRW)=[]
Thread-32382::WARNING::2013-09-20
13:16:33,446::persistentDict::256::Storage.PersistentDict::(refresh) data
has no embedded checksum - trust it as it is
Thread-32382::DEBUG::2013-09-20
13:16:33,447::persistentDict::167::Storage.PersistentDict::(transaction)
Starting transaction
Thread-32382::DEBUG::2013-09-20
13:16:33,448::persistentDict::173::Storage.PersistentDict::(transaction)
Flushing changes
Thread-32382::DEBUG::2013-09-20
13:16:33,449::persistentDict::299::Storage.PersistentDict::(flush) about to
write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt',
'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt',
'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11',
'TYPE=GLUSTERFS', 'VERSION=3',
'_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32382::DEBUG::2013-09-20
13:16:33,454::persistentDict::175::Storage.PersistentDict::(transaction)
Finished transaction
Thread-32382::DEBUG::2013-09-20
13:16:33,457::fileSD::153::Storage.StorageDomain::(__init__) Reading domain
in path /rhev/data-center/mnt/glusterSD/192.168.1.1:
rep2-virt/17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32382::DEBUG::2013-09-20
13:16:33,457::persistentDict::192::Storage.PersistentDict::(__init__)
Created a persistent dict with FileMetadataRW backend
Thread-32382::DEBUG::2013-09-20
13:16:33,469::persistentDict::234::Storage.PersistentDict::(refresh) read
lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt',
'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt',
'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11',
'TYPE=GLUSTERFS', 'VERSION=3',
'_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32382::DEBUG::2013-09-20
13:16:33,472::fileSD::535::Storage.StorageDomain::(imageGarbageCollector)
Removing remnants of deleted images []
Thread-32382::DEBUG::2013-09-20
13:16:33,472::resourceManager::420::ResourceManager::(registerNamespace)
Registering namespace '17d21ac7-5859-4f25-8de7-2a9433d50c11_imageNS'
Thread-32382::DEBUG::2013-09-20
13:16:33,473::resourceManager::420::ResourceManager::(registerNamespace)
Registering namespace '17d21ac7-5859-4f25-8de7-2a9433d50c11_volumeNS'
Thread-32382::DEBUG::2013-09-20
13:16:33,473::clusterlock::137::initSANLock::(initSANLock) Initializing
SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32387::DEBUG::2013-09-20
13:16:33,717::task::579::TaskManager.Task::(_updateState)
Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::moving from state init ->
state preparing
Thread-32387::INFO::2013-09-20
13:16:33,718::logUtils::44::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-32387::INFO::2013-09-20
13:16:33,718::logUtils::47::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {}
Thread-32387::DEBUG::2013-09-20
13:16:33,718::task::1168::TaskManager.Task::(prepare)
Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::finished: {}
Thread-32387::DEBUG::2013-09-20
13:16:33,718::task::579::TaskManager.Task::(_updateState)
Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::moving from state preparing ->
state finished
Thread-32387::DEBUG::2013-09-20
13:16:33,718::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-32387::DEBUG::2013-09-20
13:16:33,718::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-32387::DEBUG::2013-09-20
13:16:33,719::task::974::TaskManager.Task::(_decref)
Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::ref 0 aborting False
Thread-32382::ERROR::2013-09-20
13:16:34,126::clusterlock::145::initSANLock::(initSANLock) Cannot
initialize SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Traceback (most recent call last):
File "/usr/share/vdsm/storage/clusterlock.py", line 140, in initSANLock
sanlock.init_lockspace(sdUUID, idsPath)
SanlockException: (22, 'Sanlock lockspace init failure', 'Invalid argument')
Thread-32382::WARNING::2013-09-20
13:16:34,127::sd::428::Storage.StorageDomain::(initSPMlease) lease did not
initialize successfully
Traceback (most recent call last):
File "/usr/share/vdsm/storage/sd.py", line 423, in initSPMlease
self._clusterLock.initLock()
File "/usr/share/vdsm/storage/clusterlock.py", line 163, in initLock
initSANLock(self._sdUUID, self._idsPath, self._leasesPath)
File "/usr/share/vdsm/storage/clusterlock.py", line 146, in initSANLock
raise se.ClusterLockInitError()
ClusterLockInitError: Could not initialize cluster lock: ()
Thread-32382::DEBUG::2013-09-20
13:16:34,127::hsm::2624::Storage.HSM::(createStorageDomain) knownSDs:
{b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain,
983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain,
b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain,
17d21ac7-5859-4f25-8de7-2a9433d50c11: storage.glusterSD.findDomain}
Thread-32382::INFO::2013-09-20
13:16:34,128::logUtils::47::dispatcher::(wrapper) Run and protect:
createStorageDomain, Return response: None
Thread-32382::DEBUG::2013-09-20
13:16:34,128::task::1168::TaskManager.Task::(prepare)
Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::finished: None
Thread-32382::DEBUG::2013-09-20
13:16:34,128::task::579::TaskManager.Task::(_updateState)
Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::moving from state preparing ->
state finished
Thread-32382::DEBUG::2013-09-20
13:16:34,128::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-32382::DEBUG::2013-09-20
13:16:34,128::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-32382::DEBUG::2013-09-20
13:16:34,129::task::974::TaskManager.Task::(_decref)
Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::ref 0 aborting False
Thread-32389::DEBUG::2013-09-20
13:16:34,219::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32389::DEBUG::2013-09-20
13:16:34,219::task::579::TaskManager.Task::(_updateState)
Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::moving from state init ->
state preparing
Thread-32389::INFO::2013-09-20
13:16:34,220::logUtils::44::dispatcher::(wrapper) Run and protect:
getStorageDomainStats(sdUUID='17d21ac7-5859-4f25-8de7-2a9433d50c11',
options=None)
Thread-32389::DEBUG::2013-09-20
13:16:34,220::resourceManager::197::ResourceManager.Request::(__init__)
ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`9f37d808-9ad2-4c06-99ef-449b43049e80`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '2784' at
'getStorageDomainStats'
Thread-32389::DEBUG::2013-09-20
13:16:34,220::resourceManager::541::ResourceManager::(registerResource)
Trying to register resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11'
for lock type 'shared'
Thread-32389::DEBUG::2013-09-20
13:16:34,221::resourceManager::600::ResourceManager::(registerResource)
Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free. Now
locking as 'shared' (1 active user)
Thread-32389::DEBUG::2013-09-20
13:16:34,221::resourceManager::237::ResourceManager.Request::(grant)
ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`9f37d808-9ad2-4c06-99ef-449b43049e80`::Granted
request
Thread-32389::DEBUG::2013-09-20
13:16:34,221::task::811::TaskManager.Task::(resourceAcquired)
Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::_resourcesAcquired:
Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11 (shared)
Thread-32389::DEBUG::2013-09-20
13:16:34,221::task::974::TaskManager.Task::(_decref)
Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::ref 1 aborting False
Thread-32389::INFO::2013-09-20
13:16:34,223::logUtils::47::dispatcher::(wrapper) Run and protect:
getStorageDomainStats, Return response: {'stats': {'mdasize': 0,
'mdathreshold': True, 'mdavalid': True, 'diskfree': '14182986809344',
'disktotal': '14199600185344', 'mdafree': 0}}
Thread-32389::DEBUG::2013-09-20
13:16:34,223::task::1168::TaskManager.Task::(prepare)
Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::finished: {'stats':
{'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree':
'14182986809344', 'disktotal': '14199600185344', 'mdafree': 0}}
Thread-32389::DEBUG::2013-09-20
13:16:34,223::task::579::TaskManager.Task::(_updateState)
Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::moving from state preparing ->
state finished
Thread-32389::DEBUG::2013-09-20
13:16:34,223::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11': < ResourceRef
'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', isValid: 'True' obj:
'None'>}
Thread-32389::DEBUG::2013-09-20
13:16:34,224::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-32389::DEBUG::2013-09-20
13:16:34,224::resourceManager::615::ResourceManager::(releaseResource)
Trying to release resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11'
Thread-32389::DEBUG::2013-09-20
13:16:34,224::resourceManager::634::ResourceManager::(releaseResource)
Released resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' (0 active
users)
Thread-32389::DEBUG::2013-09-20
13:16:34,224::resourceManager::640::ResourceManager::(releaseResource)
Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free, finding
out if anyone is waiting for it.
Thread-32389::DEBUG::2013-09-20
13:16:34,224::resourceManager::648::ResourceManager::(releaseResource) No
one is waiting for resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11',
Clearing records.
Thread-32389::DEBUG::2013-09-20
13:16:34,225::task::974::TaskManager.Task::(_decref)
Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::ref 0 aborting False
Thread-32390::DEBUG::2013-09-20
13:16:35,099::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32390::DEBUG::2013-09-20
13:16:35,099::task::579::TaskManager.Task::(_updateState)
Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::moving from state init ->
state preparing
Thread-32390::INFO::2013-09-20
13:16:35,099::logUtils::44::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '',
'vfs_type': 'glusterfs', 'password': '******', 'id':
'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}], options=None)
Thread-32390::DEBUG::2013-09-20
13:16:35,105::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath:
glusterSD/*
Thread-32390::DEBUG::2013-09-20
13:16:35,112::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids:
('17d21ac7-5859-4f25-8de7-2a9433d50c11',
'17d21ac7-5859-4f25-8de7-2a9433d50c11')
Thread-32390::DEBUG::2013-09-20
13:16:35,113::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs:
{b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain,
983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain,
b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain,
17d21ac7-5859-4f25-8de7-2a9433d50c11: storage.glusterSD.findDomain}
Thread-32390::INFO::2013-09-20
13:16:35,113::logUtils::47::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id':
'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}]}
Thread-32390::DEBUG::2013-09-20
13:16:35,113::task::1168::TaskManager.Task::(prepare)
Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::finished: {'statuslist':
[{'status': 0, 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}]}
Thread-32390::DEBUG::2013-09-20
13:16:35,113::task::579::TaskManager.Task::(_updateState)
Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::moving from state preparing ->
state finished
Thread-32390::DEBUG::2013-09-20
13:16:35,113::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-32390::DEBUG::2013-09-20
13:16:35,114::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-32390::DEBUG::2013-09-20
13:16:35,114::task::974::TaskManager.Task::(_decref)
Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::ref 0 aborting False
Thread-32393::DEBUG::2013-09-20
13:16:35,148::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32393::DEBUG::2013-09-20
13:16:35,148::task::579::TaskManager.Task::(_updateState)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state init ->
state preparing
Thread-32393::INFO::2013-09-20
13:16:35,148::logUtils::44::dispatcher::(wrapper) Run and protect:
createStoragePool(poolType=None,
spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', poolName='Default',
masterDom='17d21ac7-5859-4f25-8de7-2a9433d50c11',
domList=['17d21ac7-5859-4f25-8de7-2a9433d50c11'], masterVersion=9,
lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60,
ioOpTimeoutSec=10, leaseRetries=3, options=None)
Thread-32393::INFO::2013-09-20
13:16:35,149::fileSD::315::Storage.StorageDomain::(validate)
sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32393::DEBUG::2013-09-20
13:16:35,161::persistentDict::234::Storage.PersistentDict::(refresh) read
lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt',
'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt',
'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11',
'TYPE=GLUSTERFS', 'VERSION=3',
'_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::DEBUG::2013-09-20
13:16:35,162::resourceManager::197::ResourceManager.Request::(__init__)
ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`31060ad0-6633-4bbf-a859-b3f0c27af760`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '954' at
'createStoragePool'
Thread-32393::DEBUG::2013-09-20
13:16:35,162::resourceManager::541::ResourceManager::(registerResource)
Trying to register resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3'
for lock type 'exclusive'
Thread-32393::DEBUG::2013-09-20
13:16:35,162::resourceManager::600::ResourceManager::(registerResource)
Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free. Now
locking as 'exclusive' (1 active user)
Thread-32393::DEBUG::2013-09-20
13:16:35,162::resourceManager::237::ResourceManager.Request::(grant)
ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`31060ad0-6633-4bbf-a859-b3f0c27af760`::Granted
request
Thread-32393::DEBUG::2013-09-20
13:16:35,163::task::811::TaskManager.Task::(resourceAcquired)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_resourcesAcquired:
Storage.5849b030-626e-47cb-ad90-3ce782d831b3 (exclusive)
Thread-32393::DEBUG::2013-09-20
13:16:35,163::task::974::TaskManager.Task::(_decref)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting False
Thread-32393::DEBUG::2013-09-20
13:16:35,163::resourceManager::197::ResourceManager.Request::(__init__)
ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`db5f52a0-d455-419c-b8a5-86fc6b695571`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '956' at
'createStoragePool'
Thread-32393::DEBUG::2013-09-20
13:16:35,164::resourceManager::541::ResourceManager::(registerResource)
Trying to register resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11'
for lock type 'exclusive'
Thread-32393::DEBUG::2013-09-20
13:16:35,164::resourceManager::600::ResourceManager::(registerResource)
Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free. Now
locking as 'exclusive' (1 active user)
Thread-32393::DEBUG::2013-09-20
13:16:35,164::resourceManager::237::ResourceManager.Request::(grant)
ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`db5f52a0-d455-419c-b8a5-86fc6b695571`::Granted
request
Thread-32393::DEBUG::2013-09-20
13:16:35,165::task::811::TaskManager.Task::(resourceAcquired)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_resourcesAcquired:
Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11 (exclusive)
Thread-32393::DEBUG::2013-09-20
13:16:35,165::task::974::TaskManager.Task::(_decref)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting False
Thread-32393::INFO::2013-09-20
13:16:35,166::sp::592::Storage.StoragePool::(create)
spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 poolName=Default
master_sd=17d21ac7-5859-4f25-8de7-2a9433d50c11
domList=['17d21ac7-5859-4f25-8de7-2a9433d50c11'] masterVersion=9
{'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3,
'LOCKRENEWALINTERVALSEC': 5}
Thread-32393::INFO::2013-09-20
13:16:35,166::fileSD::315::Storage.StorageDomain::(validate)
sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32393::DEBUG::2013-09-20
13:16:35,177::persistentDict::234::Storage.PersistentDict::(refresh) read
lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt',
'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt',
'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11',
'TYPE=GLUSTERFS', 'VERSION=3',
'_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::DEBUG::2013-09-20
13:16:35,188::persistentDict::234::Storage.PersistentDict::(refresh) read
lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt',
'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt',
'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11',
'TYPE=GLUSTERFS', 'VERSION=3',
'_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::WARNING::2013-09-20
13:16:35,189::fileUtils::167::Storage.fileUtils::(createdir) Dir
/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3 already exists
Thread-32393::DEBUG::2013-09-20
13:16:35,189::persistentDict::167::Storage.PersistentDict::(transaction)
Starting transaction
Thread-32393::DEBUG::2013-09-20
13:16:35,189::persistentDict::173::Storage.PersistentDict::(transaction)
Flushing changes
Thread-32393::DEBUG::2013-09-20
13:16:35,189::persistentDict::299::Storage.PersistentDict::(flush) about to
write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt',
'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11',
'TYPE=GLUSTERFS', 'VERSION=3',
'_SHA_CKSUM=61b814a65ea3ede1f0ae1d58e139adc06bf9eda5']
Thread-32393::DEBUG::2013-09-20
13:16:35,194::persistentDict::175::Storage.PersistentDict::(transaction)
Finished transaction
Thread-32393::INFO::2013-09-20
13:16:35,194::clusterlock::174::SANLock::(acquireHostId) Acquiring host id
for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 (id: 250)
Thread-32393::ERROR::2013-09-20
13:16:36,196::task::850::TaskManager.Task::(_setError)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 857, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 960, in createStoragePool
masterVersion, leaseParams)
File "/usr/share/vdsm/storage/sp.py", line 617, in create
self._acquireTemporaryClusterLock(msdUUID, leaseParams)
File "/usr/share/vdsm/storage/sp.py", line 559, in
_acquireTemporaryClusterLock
msd.acquireHostId(self.id)
File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId
self._clusterLock.acquireHostId(hostId, async)
File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId
raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id:
('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock
lockspace add failure', 'Invalid argument'))
Thread-32393::DEBUG::2013-09-20
13:16:36,196::task::869::TaskManager.Task::(_run)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Task._run:
72348d40-8442-4dbf-bc66-1d354da5fc31 (None,
'5849b030-626e-47cb-ad90-3ce782d831b3', 'Default',
'17d21ac7-5859-4f25-8de7-2a9433d50c11',
['17d21ac7-5859-4f25-8de7-2a9433d50c11'], 9, None, 5, 60, 10, 3) {} failed
- stopping task
Thread-32393::DEBUG::2013-09-20
13:16:36,197::task::1194::TaskManager.Task::(stop)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::stopping in state preparing
(force False)
Thread-32393::DEBUG::2013-09-20
13:16:36,197::task::974::TaskManager.Task::(_decref)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting True
Thread-32393::INFO::2013-09-20
13:16:36,197::task::1151::TaskManager.Task::(prepare)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::aborting: Task is aborted:
'Cannot acquire host id' - code 661
Thread-32393::DEBUG::2013-09-20
13:16:36,197::task::1156::TaskManager.Task::(prepare)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Prepare: aborted: Cannot
acquire host id
Thread-32393::DEBUG::2013-09-20
13:16:36,197::task::974::TaskManager.Task::(_decref)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 0 aborting True
Thread-32393::DEBUG::2013-09-20
13:16:36,197::task::909::TaskManager.Task::(_doAbort)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Task._doAbort: force False
Thread-32393::DEBUG::2013-09-20
13:16:36,198::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-32393::DEBUG::2013-09-20
13:16:36,198::task::579::TaskManager.Task::(_updateState)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state preparing ->
state aborting
Thread-32393::DEBUG::2013-09-20
13:16:36,198::task::534::TaskManager.Task::(__state_aborting)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_aborting: recover policy none
Thread-32393::DEBUG::2013-09-20
13:16:36,198::task::579::TaskManager.Task::(_updateState)
Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state aborting ->
state failed
Thread-32393::DEBUG::2013-09-20
13:16:36,198::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11': < ResourceRef
'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', isValid: 'True' obj:
'None'>, 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3': < ResourceRef
'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', isValid: 'True' obj:
'None'>}
Thread-32393::DEBUG::2013-09-20
13:16:36,198::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-32393::DEBUG::2013-09-20
13:16:36,199::resourceManager::615::ResourceManager::(releaseResource)
Trying to release resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11'
Thread-32393::DEBUG::2013-09-20
13:16:36,199::resourceManager::634::ResourceManager::(releaseResource)
Released resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' (0 active
users)
Thread-32393::DEBUG::2013-09-20
13:16:36,199::resourceManager::640::ResourceManager::(releaseResource)
Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free, finding
out if anyone is waiting for it.
Thread-32393::DEBUG::2013-09-20
13:16:36,199::resourceManager::648::ResourceManager::(releaseResource) No
one is waiting for resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11',
Clearing records.
Thread-32393::DEBUG::2013-09-20
13:16:36,199::resourceManager::615::ResourceManager::(releaseResource)
Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3'
Thread-32393::DEBUG::2013-09-20
13:16:36,200::resourceManager::634::ResourceManager::(releaseResource)
Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active
users)
Thread-32393::DEBUG::2013-09-20
13:16:36,200::resourceManager::640::ResourceManager::(releaseResource)
Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding
out if anyone is waiting for it.
Thread-32393::DEBUG::2013-09-20
13:16:36,200::resourceManager::648::ResourceManager::(releaseResource) No
one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3',
Clearing records.
Thread-32393::ERROR::2013-09-20
13:16:36,200::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status':
{'message': "Cannot acquire host id:
('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock
lockspace add failure', 'Invalid argument'))", 'code': 661}}
Thread-32398::DEBUG::2013-09-20
13:16:48,921::task::579::TaskManager.Task::(_updateState)
Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::moving from state init ->
state preparing
Thread-32398::INFO::2013-09-20
13:16:48,922::logUtils::44::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-32398::INFO::2013-09-20
13:16:48,922::logUtils::47::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {}
Thread-32398::DEBUG::2013-09-20
13:16:48,922::task::1168::TaskManager.Task::(prepare)
Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::finished: {}
Thread-32398::DEBUG::2013-09-20
13:16:48,922::task::579::TaskManager.Task::(_updateState)
Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::moving from state preparing ->
state finished
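If anyone has ideas, the next things I plan to look at on the host are on the
sanlock side (commands as I understand them, so correct me if there is a
better approach):

  # sanlock's own view of its lockspaces, plus its daemon log
  sanlock client status
  tail -n 50 /var/log/sanlock.log
  # check that the sanlock user can actually read the ids file over the gluster mount
  sudo -u sanlock dd if=/rhev/data-center/mnt/glusterSD/192.168.1.1:rep2-virt/17d21ac7-5859-4f25-8de7-2a9433d50c11/dom_md/ids bs=512 count=1 of=/dev/null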
Steve Dainard
Infrastructure Manager
Miovision <http://miovision.com/> | Rethink Traffic
519-513-2407 ex.250
877-646-8476 (toll-free)
Blog <http://miovision.com/blog> | LinkedIn <https://www.linkedin.com/company/miovision-technologies> | Twitter <https://twitter.com/miovision> | Facebook <https://www.facebook.com/miovision>
------------------------------
Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON,
Canada | N2C 1L3
On Fri, Sep 20, 2013 at 12:23 PM, Deepak C Shetty <
deepakcs at linux.vnet.ibm.com> wrote:
> Either you can use the volume set option as mentioned in the wiki page,
> or, if the Gluster volume is added/managed in the oVirt UI, go to the
> "Volumes" tab, select the gluster volume and click on "Optimize for
> virt. store". That should also set the volume options, in addition to a
> few other things.
>
> thanx,
> deepak
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>