<div dir="ltr"><div>Hi,<br><br></div>You can also apply this on every gluster node:<br><br>For group virt (optimize for virt store):<br><br>1. Create the file /var/lib/glusterd/groups/virt<br>2. Paste into it the contents of <a href="https://raw.githubusercontent.com/gluster/glusterfs/master/extras/group-virt.example">https://raw.githubusercontent.com/gluster/glusterfs/master/extras/group-virt.example</a><br>
3. service glusterd restart <br>4. service vdsmd restart<br><br><pre>--------------<br>quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=enable
quorum-type=auto
server-quorum-type=server<br>--------------<br><br></pre><pre>Thanks,<br>Punit Dambiwal<br></pre><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Jun 23, 2014 at 4:35 PM, Itamar Heim <span dir="ltr"><<a href="mailto:iheim@redhat.com" target="_blank">iheim@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 06/22/2014 06:38 PM, Tiemen Ruiten wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 06/21/14 16:57, Tiemen Ruiten wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 06/21/14 16:37, Tiemen Ruiten wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello,<br>
<br>
I've been struggling to set up an Ovirt cluster and am now bumping into<br>
this problem:<br>
<br>
When I try to create a new (Gluster) storage domain, it fails to attach<br>
to the data center. The error on the node from vdsm.log:<br>
<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,157::BindingXMLRPC::<u></u>251::vds::(wrapper) client [192.168.10.119]<br>
flowID [6e44c0a3]<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,159::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`97b78287-45d2-4d5a-8336-<u></u>460987df3840`::moving from state init -><br>
state preparing<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,160::logUtils::44::<u></u>dispatcher::(wrapper) Run and protect:<br>
connectStorageServer(domType=<u></u>7,<br>
spUUID='00000000-0000-0000-<u></u>0000-000000000000', conList=[{'port': '',<br>
'connection': '192.168.10.120:/vmimage', 'iqn': '', 'user': '', 'tpgt':<br>
'1', 'vfs_type': 'glusterfs', 'password': '******', 'id':<br>
'901b15ec-6b05-43c1-8a50-<u></u>06b34c8ffdbd'}], options=None)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,172::hsm::2340::<u></u>Storage.HSM::(__<u></u>prefetchDomains)<br>
glusterDomPath: glusterSD/*<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,185::hsm::2352::<u></u>Storage.HSM::(__<u></u>prefetchDomains) Found SD<br>
uuids: ('dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b',)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,185::hsm::2408::<u></u>Storage.HSM::(<u></u>connectStorageServer) knownSDs:<br>
{dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b: storage.glusterSD.findDomain}<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,186::logUtils::47::<u></u>dispatcher::(wrapper) Run and protect:<br>
connectStorageServer, Return response: {'statuslist': [{'status': 0,<br>
'id': '901b15ec-6b05-43c1-8a50-<u></u>06b34c8ffdbd'}]}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,186::task::1185::<u></u>TaskManager.Task::(prepare)<br>
Task=`97b78287-45d2-4d5a-8336-<u></u>460987df3840`::finished: {'statuslist':<br>
[{'status': 0, 'id': '901b15ec-6b05-43c1-8a50-<u></u>06b34c8ffdbd'}]}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,187::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`97b78287-45d2-4d5a-8336-<u></u>460987df3840`::moving from state preparing<br>
-> state finished<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,187::resourceManager:<u></u>:940::ResourceManager.Owner::(<u></u>releaseAll)<br>
Owner.releaseAll requests {} resources {}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,187::resourceManager:<u></u>:977::ResourceManager.Owner::(<u></u>cancelAll)<br>
Owner.cancelAll requests {}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,188::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`97b78287-45d2-4d5a-8336-<u></u>460987df3840`::ref 0 aborting False<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,195::BindingXMLRPC::<u></u>251::vds::(wrapper) client [192.168.10.119]<br>
flowID [6e44c0a3]<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,195::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::moving from state init -><br>
state preparing<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,196::logUtils::44::<u></u>dispatcher::(wrapper) Run and protect:<br>
createStoragePool(poolType=<u></u>None,<br>
spUUID='806d2356-12cf-437c-<u></u>8917-dd13ee823e36', poolName='testing',<br>
masterDom='dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b',<br>
domList=['dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b'], masterVersion=2,<br>
lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60,<br>
ioOpTimeoutSec=10, leaseRetries=3, options=None)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,196::misc::756::<u></u>SamplingMethod::(__call__) Trying to enter<br>
sampling method (storage.sdc.refreshStorage)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,197::misc::758::<u></u>SamplingMethod::(__call__) Got in to sampling<br>
method<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,197::misc::756::<u></u>SamplingMethod::(__call__) Trying to enter<br>
sampling method (storage.iscsi.rescan)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,198::misc::758::<u></u>SamplingMethod::(__call__) Got in to sampling<br>
method<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,198::iscsi::407::<u></u>Storage.ISCSI::(rescan) Performing SCSI scan,<br>
this will take up to 30 seconds<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,199::iscsiadm::92::<u></u>Storage.Misc.excCmd::(_runCmd)<br>
'/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,228::misc::766::<u></u>SamplingMethod::(__call__) Returning last result<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,229::multipath::110::<u></u>Storage.Misc.excCmd::(rescan)<br>
'/usr/bin/sudo -n /sbin/multipath -r' (cwd None)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,294::multipath::110::<u></u>Storage.Misc.excCmd::(rescan) SUCCESS:<br>
<err> = ''; <rc> = 0<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,295::lvm::497::<u></u>OperationMutex::(_<u></u>invalidateAllPvs) Operation<br>
'lvm invalidate operation' got the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,295::lvm::499::<u></u>OperationMutex::(_<u></u>invalidateAllPvs) Operation<br>
'lvm invalidate operation' released the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,296::lvm::508::<u></u>OperationMutex::(_<u></u>invalidateAllVgs) Operation<br>
'lvm invalidate operation' got the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,296::lvm::510::<u></u>OperationMutex::(_<u></u>invalidateAllVgs) Operation<br>
'lvm invalidate operation' released the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,297::lvm::528::<u></u>OperationMutex::(_<u></u>invalidateAllLvs) Operation<br>
'lvm invalidate operation' got the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,297::lvm::530::<u></u>OperationMutex::(_<u></u>invalidateAllLvs) Operation<br>
'lvm invalidate operation' released the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,298::misc::766::<u></u>SamplingMethod::(__call__) Returning last result<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,318::fileSD::150::<u></u>Storage.StorageDomain::(__<u></u>init__) Reading<br>
domain in path<br>
/rhev/data-center/mnt/<u></u>glusterSD/192.168.10.120:_<u></u>vmimage/dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,322::persistentDict::<u></u>192::Storage.PersistentDict::(<u></u>__init__)<br>
Created a persistent dict with FileMetadataRW backend<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,328::persistentDict::<u></u>234::Storage.PersistentDict::(<u></u>refresh)<br>
read lines (FileMetadataRW)=['CLASS=Data'<u></u>, 'DESCRIPTION=vmimage',<br>
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',<br>
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',<br>
'REMOTE_PATH=192.168.10.120:/<u></u>vmimage', 'ROLE=Regular',<br>
'SDUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b', 'TYPE=GLUSTERFS',<br>
'VERSION=3', '_SHA_CKSUM=<u></u>9fdc035c398d2cd8b5c31bf5eea288<u></u>2c8782ed57']<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,334::fileSD::609::<u></u>Storage.StorageDomain::(<u></u>imageGarbageCollector) Removing<br>
remnants of deleted images []<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,335::sd::383::<u></u>Storage.StorageDomain::(_<u></u>registerResourceNamespaces)<br>
Resource namespace dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b_imageNS already<br>
registered<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,335::sd::391::<u></u>Storage.StorageDomain::(_<u></u>registerResourceNamespaces)<br>
Resource namespace dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b_volumeNS already<br>
registered<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,336::fileSD::350::<u></u>Storage.StorageDomain::(<u></u>validate)<br>
sdUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,340::persistentDict::<u></u>234::Storage.PersistentDict::(<u></u>refresh)<br>
read lines (FileMetadataRW)=['CLASS=Data'<u></u>, 'DESCRIPTION=vmimage',<br>
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',<br>
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',<br>
'REMOTE_PATH=192.168.10.120:/<u></u>vmimage', 'ROLE=Regular',<br>
'SDUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b', 'TYPE=GLUSTERFS',<br>
'VERSION=3', '_SHA_CKSUM=<u></u>9fdc035c398d2cd8b5c31bf5eea288<u></u>2c8782ed57']<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,341::resourceManager:<u></u>:198::ResourceManager.Request:<u></u>:(__init__)<br>
ResName=`Storage.806d2356-<u></u>12cf-437c-8917-dd13ee823e36`<u></u>ReqID=`de2ede47-22fa-43b8-<u></u>9f3b-dc714a45b450`::Request<br>
was made in '/usr/share/vdsm/storage/hsm.<u></u>py' line '980' at<br>
'createStoragePool'<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,342::resourceManager:<u></u>:542::ResourceManager::(<u></u>registerResource)<br>
Trying to register resource<br>
'Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36' for lock type 'exclusive'<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,342::resourceManager:<u></u>:601::ResourceManager::(<u></u>registerResource)<br>
Resource 'Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36' is free. Now<br>
locking as 'exclusive' (1 active user)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,343::resourceManager:<u></u>:238::ResourceManager.Request:<u></u>:(grant)<br>
ResName=`Storage.806d2356-<u></u>12cf-437c-8917-dd13ee823e36`<u></u>ReqID=`de2ede47-22fa-43b8-<u></u>9f3b-dc714a45b450`::Granted<br>
request<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,343::task::827::<u></u>TaskManager.Task::(<u></u>resourceAcquired)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::_<u></u>resourcesAcquired:<br>
Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36 (exclusive)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,344::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::ref 1 aborting False<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,345::resourceManager:<u></u>:198::ResourceManager.Request:<u></u>:(__init__)<br>
ResName=`Storage.dc661957-<u></u>c0c1-44ba-a5b9-e6558904207b`<u></u>ReqID=`71bf6917-b501-4016-<u></u>ad8e-8b84849da8cb`::Request<br>
was made in '/usr/share/vdsm/storage/hsm.<u></u>py' line '982' at<br>
'createStoragePool'<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,345::resourceManager:<u></u>:542::ResourceManager::(<u></u>registerResource)<br>
Trying to register resource<br>
'Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b' for lock type 'exclusive'<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,346::resourceManager:<u></u>:601::ResourceManager::(<u></u>registerResource)<br>
Resource 'Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b' is free. Now<br>
locking as 'exclusive' (1 active user)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,346::resourceManager:<u></u>:238::ResourceManager.Request:<u></u>:(grant)<br>
ResName=`Storage.dc661957-<u></u>c0c1-44ba-a5b9-e6558904207b`<u></u>ReqID=`71bf6917-b501-4016-<u></u>ad8e-8b84849da8cb`::Granted<br>
request<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,347::task::827::<u></u>TaskManager.Task::(<u></u>resourceAcquired)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::_<u></u>resourcesAcquired:<br>
Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b (exclusive)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,347::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::ref 1 aborting False<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,347::sp::133::<u></u>Storage.StoragePool::(<u></u>setBackend) updating pool<br>
806d2356-12cf-437c-8917-<u></u>dd13ee823e36 backend from type NoneType instance<br>
0x39e278bf00 to type StoragePoolDiskBackend instance 0x7f764c093cb0<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,348::sp::548::<u></u>Storage.StoragePool::(create)<br>
spUUID=806d2356-12cf-437c-<u></u>8917-dd13ee823e36 poolName=testing<br>
master_sd=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b<br>
domList=['dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b'] masterVersion=2<br>
{'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3,<br>
'LOCKRENEWALINTERVALSEC': 5}<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,348::fileSD::350::<u></u>Storage.StorageDomain::(<u></u>validate)<br>
sdUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,352::persistentDict::<u></u>234::Storage.PersistentDict::(<u></u>refresh)<br>
read lines (FileMetadataRW)=['CLASS=Data'<u></u>, 'DESCRIPTION=vmimage',<br>
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',<br>
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',<br>
'REMOTE_PATH=192.168.10.120:/<u></u>vmimage', 'ROLE=Regular',<br>
'SDUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b', 'TYPE=GLUSTERFS',<br>
'VERSION=3', '_SHA_CKSUM=<u></u>9fdc035c398d2cd8b5c31bf5eea288<u></u>2c8782ed57']<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,357::persistentDict::<u></u>234::Storage.PersistentDict::(<u></u>refresh)<br>
read lines (FileMetadataRW)=['CLASS=Data'<u></u>, 'DESCRIPTION=vmimage',<br>
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',<br>
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',<br>
'REMOTE_PATH=192.168.10.120:/<u></u>vmimage', 'ROLE=Regular',<br>
'SDUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b', 'TYPE=GLUSTERFS',<br>
'VERSION=3', '_SHA_CKSUM=<u></u>9fdc035c398d2cd8b5c31bf5eea288<u></u>2c8782ed57']<br>
Thread-13::WARNING::2014-06-21<br>
16:17:14,358::fileUtils::167::<u></u>Storage.fileUtils::(createdir) Dir<br>
/rhev/data-center/806d2356-<u></u>12cf-437c-8917-dd13ee823e36 already exists<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,358::persistentDict::<u></u>167::Storage.PersistentDict::(<u></u>transaction)<br>
Starting transaction<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,359::persistentDict::<u></u>175::Storage.PersistentDict::(<u></u>transaction)<br>
Finished transaction<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,359::clusterlock::<u></u>184::SANLock::(acquireHostId) Acquiring host<br>
id for domain dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b (id: 250)<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,394::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`c4430b80-31d9-4a1d-bee8-<u></u>fae01a438da6`::moving from state init -><br>
state preparing<br>
Thread-24::INFO::2014-06-21<br>
16:17:14,395::logUtils::44::<u></u>dispatcher::(wrapper) Run and protect:<br>
repoStats(options=None)<br>
Thread-24::INFO::2014-06-21<br>
16:17:14,395::logUtils::47::<u></u>dispatcher::(wrapper) Run and protect:<br>
repoStats, Return response: {}<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,396::task::1185::<u></u>TaskManager.Task::(prepare)<br>
Task=`c4430b80-31d9-4a1d-bee8-<u></u>fae01a438da6`::finished: {}<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,396::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`c4430b80-31d9-4a1d-bee8-<u></u>fae01a438da6`::moving from state preparing<br>
-> state finished<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,396::resourceManager:<u></u>:940::ResourceManager.Owner::(<u></u>releaseAll)<br>
Owner.releaseAll requests {} resources {}<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,396::resourceManager:<u></u>:977::ResourceManager.Owner::(<u></u>cancelAll)<br>
Owner.cancelAll requests {}<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,397::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`c4430b80-31d9-4a1d-bee8-<u></u>fae01a438da6`::ref 0 aborting False<br>
Thread-13::ERROR::2014-06-21<br>
16:17:15,361::task::866::<u></u>TaskManager.Task::(_setError)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::Unexpected error<br>
Traceback (most recent call last):<br>
File "/usr/share/vdsm/storage/task.<u></u>py", line 873, in _run<br>
return fn(*args, **kargs)<br>
File "/usr/share/vdsm/logUtils.py", line 45, in wrapper<br>
res = f(*args, **kwargs)<br>
File "/usr/share/vdsm/storage/hsm.<u></u>py", line 988, in createStoragePool<br>
leaseParams)<br>
File "/usr/share/vdsm/storage/sp.<u></u>py", line 573, in create<br>
self._<u></u>acquireTemporaryClusterLock(<u></u>msdUUID, leaseParams)<br>
File "/usr/share/vdsm/storage/sp.<u></u>py", line 515, in<br>
_acquireTemporaryClusterLock<br>
msd.acquireHostId(<a href="http://self.id" target="_blank">self.id</a>)<br>
File "/usr/share/vdsm/storage/sd.<u></u>py", line 467, in acquireHostId<br>
self._clusterLock.<u></u>acquireHostId(hostId, async)<br>
File "/usr/share/vdsm/storage/<u></u>clusterlock.py", line 199, in acquireHostId<br>
raise se.AcquireHostIdFailure(self._<u></u>sdUUID, e)<br>
AcquireHostIdFailure: Cannot acquire host id:<br>
('dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b', SanlockException(90, 'Sanlock<br>
lockspace add failure', 'Message too long'))<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,363::task::885::<u></u>TaskManager.Task::(_run)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::Task._run:<br>
d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9 (None,<br>
'806d2356-12cf-437c-8917-<u></u>dd13ee823e36', 'testing',<br>
'dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b',<br>
['dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b'], 2, None, 5, 60, 10, 3) {}<br>
failed - stopping task<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,364::task::1211::<u></u>TaskManager.Task::(stop)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::stopping in state preparing<br>
(force False)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,364::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::ref 1 aborting True<br>
Thread-13::INFO::2014-06-21<br>
16:17:15,365::task::1168::<u></u>TaskManager.Task::(prepare)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::aborting: Task is aborted:<br>
'Cannot acquire host id' - code 661<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,365::task::1173::<u></u>TaskManager.Task::(prepare)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::Prepare: aborted: Cannot<br>
acquire host id<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,365::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::ref 0 aborting True<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,366::task::925::<u></u>TaskManager.Task::(_doAbort)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::Task._doAbort: force False<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,366::resourceManager:<u></u>:977::ResourceManager.Owner::(<u></u>cancelAll)<br>
Owner.cancelAll requests {}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,366::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::moving from state preparing<br>
-> state aborting<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,366::task::550::<u></u>TaskManager.Task::(__state_<u></u>aborting)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::_aborting: recover policy none<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,367::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::moving from state aborting<br>
-> state failed<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,367::resourceManager:<u></u>:940::ResourceManager.Owner::(<u></u>releaseAll)<br>
Owner.releaseAll requests {} resources<br>
{'Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b': < ResourceRef<br>
'Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b', isValid: 'True' obj:<br>
'None'>, 'Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36': < ResourceRef<br>
'Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36', isValid: 'True' obj:<br>
'None'>}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,367::resourceManager:<u></u>:977::ResourceManager.Owner::(<u></u>cancelAll)<br>
Owner.cancelAll requests {}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,368::resourceManager:<u></u>:616::ResourceManager::(<u></u>releaseResource)<br>
Trying to release resource 'Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b'<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,369::resourceManager:<u></u>:635::ResourceManager::(<u></u>releaseResource)<br>
Released resource 'Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b' (0<br>
active users)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,369::resourceManager:<u></u>:641::ResourceManager::(<u></u>releaseResource)<br>
Resource 'Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b' is free, finding<br>
out if anyone is waiting for it.<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,369::resourceManager:<u></u>:649::ResourceManager::(<u></u>releaseResource)<br>
No one is waiting for resource<br>
'Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b', Clearing records.<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,370::resourceManager:<u></u>:616::ResourceManager::(<u></u>releaseResource)<br>
Trying to release resource 'Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36'<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,370::resourceManager:<u></u>:635::ResourceManager::(<u></u>releaseResource)<br>
Released resource 'Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36' (0<br>
active users)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,370::resourceManager:<u></u>:641::ResourceManager::(<u></u>releaseResource)<br>
Resource 'Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36' is free, finding<br>
out if anyone is waiting for it.<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,371::resourceManager:<u></u>:649::ResourceManager::(<u></u>releaseResource)<br>
No one is waiting for resource<br>
'Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36', Clearing records.<br>
Thread-13::ERROR::2014-06-21<br>
16:17:15,371::dispatcher::65::<u></u>Storage.Dispatcher.Protect::(<u></u>run)<br>
{'status': {'message': "Cannot acquire host id:<br>
('dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b', SanlockException(90, 'Sanlock<br>
lockspace add failure', 'Message too long'))", 'code': 661}}<br>
<br>
<br>
My oVirt version: 3.4.2-1.el6 (CentOS 6.5)<br>
The hypervisor hosts run GlusterFS 3.5.0-3.fc19.(Fedora 19)<br>
The two storage servers run GlusterFS 3.5.0-2.el6 (Centos 6.5)<br>
<br>
So I am NOT using local storage of the hypervisor hosts for the<br>
GlusterFS bricks.<br>
<br>
What can I do to solve this error?<br>
<br>
</blockquote>
By the way, the options on the GlusterFS volume are as follows:<br>
<br>
Volume Name: vmimage<br>
Type: Replicate<br>
Volume ID: 348e1d45-1b80-420b-91c2-<u></u>93f0d764f227<br>
Status: Started<br>
Number of Bricks: 1 x 2 = 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 192.168.10.120:/export/<u></u>gluster01/brick<br>
Brick2: 192.168.10.149:/export/<u></u>gluster01/brick<br>
Options Reconfigured:<br>
network.ping-timeout: 10<br>
cluster.quorum-count: 1<br>
cluster.quorum-type: auto<br>
server.allow-insecure: on<br>
storage.owner-uid: 36<br>
storage.owner-gid: 36<br>
<br>
</blockquote>
OK, fixed it. For someone else's reference, I had to set the following<br>
options on the gluster volume:<br>
<br>
network.remote-dio: on<br>
performance.io-cache: off<br>
performance.read-ahead: off<br>
performance.quick-read: off<br>
cluster.eager-lock: enable<br>
<br>
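The same options can be set from the CLI on any node in the trusted pool; a minimal sketch, assuming the standard `gluster volume set` syntax and using the volume name vmimage from the volume info above:<br>
<br>

```shell
# Apply the virt-store options that resolved the sanlock
# "Cannot acquire host id" error (volume name from the thread).
gluster volume set vmimage network.remote-dio on
gluster volume set vmimage performance.io-cache off
gluster volume set vmimage performance.read-ahead off
gluster volume set vmimage performance.quick-read off
gluster volume set vmimage cluster.eager-lock enable

# Verify they took effect:
gluster volume info vmimage
```

<br>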
Apparently that's done by the 'optimize for virt store' checkbox, but<br>
obviously not when the volume is created manually. Having this in the<br>
documentation on <a href="http://ovirt.org" target="_blank">ovirt.org</a> would have saved me a lot of time and<br>
frustration.<br>
<br>
<br>
</blockquote>
<br></div></div>
it's a wiki, how about adding this for the next guy?<br>
<br>
thanks,<br>
Itamar<div class="HOEnZb"><div class="h5"><br>
______________________________<u></u>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/<u></u>mailman/listinfo/users</a><br>
</div></div></blockquote></div><br></div>