<div dir="ltr"><div>Hi,<br><br></div>You can also apply the settings manually on every Gluster node:<br><br>For the virt group (optimize for virt store):<br><br>1. Create the file /var/lib/glusterd/groups/virt<br>2. Paste into it the contents of :- <a href="https://raw.githubusercontent.com/gluster/glusterfs/master/extras/group-virt.example">https://raw.githubusercontent.com/gluster/glusterfs/master/extras/group-virt.example</a><br>
3. service glusterd restart<br>4. service vdsmd restart<br><br><pre>--------------<br>quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=enable
quorum-type=auto
server-quorum-type=server<br>--------------<br><br></pre><pre>Thanks,<br>Punit Dambiwal<br></pre><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Jun 23, 2014 at 4:35 PM, Itamar Heim <span dir="ltr">&lt;<a href="mailto:iheim@redhat.com" target="_blank">iheim@redhat.com</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 06/22/2014 06:38 PM, Tiemen Ruiten wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 06/21/14 16:57, Tiemen Ruiten wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 06/21/14 16:37, Tiemen Ruiten wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello,<br>
<br>
I&#39;ve been struggling to set up an oVirt cluster and am now bumping into<br>
this problem:<br>
<br>
When I try to create a new (Gluster) storage domain, it fails to attach<br>
to the data center. The error on the node from vdsm.log:<br>
<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,157::BindingXMLRPC::<u></u>251::vds::(wrapper) client [192.168.10.119]<br>
flowID [6e44c0a3]<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,159::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`97b78287-45d2-4d5a-8336-<u></u>460987df3840`::moving from state init -&gt;<br>
state preparing<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,160::logUtils::44::<u></u>dispatcher::(wrapper) Run and protect:<br>
connectStorageServer(domType=<u></u>7,<br>
spUUID=&#39;00000000-0000-0000-<u></u>0000-000000000000&#39;, conList=[{&#39;port&#39;: &#39;&#39;,<br>
&#39;connection&#39;: &#39;192.168.10.120:/vmimage&#39;, &#39;iqn&#39;: &#39;&#39;, &#39;user&#39;: &#39;&#39;, &#39;tpgt&#39;:<br>
&#39;1&#39;, &#39;vfs_type&#39;: &#39;glusterfs&#39;, &#39;password&#39;: &#39;******&#39;, &#39;id&#39;:<br>
&#39;901b15ec-6b05-43c1-8a50-<u></u>06b34c8ffdbd&#39;}], options=None)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,172::hsm::2340::<u></u>Storage.HSM::(__<u></u>prefetchDomains)<br>
glusterDomPath: glusterSD/*<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,185::hsm::2352::<u></u>Storage.HSM::(__<u></u>prefetchDomains) Found SD<br>
uuids: (&#39;dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b&#39;,)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,185::hsm::2408::<u></u>Storage.HSM::(<u></u>connectStorageServer) knownSDs:<br>
{dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b: storage.glusterSD.findDomain}<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,186::logUtils::47::<u></u>dispatcher::(wrapper) Run and protect:<br>
connectStorageServer, Return response: {&#39;statuslist&#39;: [{&#39;status&#39;: 0,<br>
&#39;id&#39;: &#39;901b15ec-6b05-43c1-8a50-<u></u>06b34c8ffdbd&#39;}]}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,186::task::1185::<u></u>TaskManager.Task::(prepare)<br>
Task=`97b78287-45d2-4d5a-8336-<u></u>460987df3840`::finished: {&#39;statuslist&#39;:<br>
[{&#39;status&#39;: 0, &#39;id&#39;: &#39;901b15ec-6b05-43c1-8a50-<u></u>06b34c8ffdbd&#39;}]}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,187::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`97b78287-45d2-4d5a-8336-<u></u>460987df3840`::moving from state preparing<br>
-&gt; state finished<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,187::resourceManager:<u></u>:940::ResourceManager.Owner::(<u></u>releaseAll)<br>
Owner.releaseAll requests {} resources {}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,187::resourceManager:<u></u>:977::ResourceManager.Owner::(<u></u>cancelAll)<br>
Owner.cancelAll requests {}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,188::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`97b78287-45d2-4d5a-8336-<u></u>460987df3840`::ref 0 aborting False<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,195::BindingXMLRPC::<u></u>251::vds::(wrapper) client [192.168.10.119]<br>
flowID [6e44c0a3]<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,195::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::moving from state init -&gt;<br>
state preparing<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,196::logUtils::44::<u></u>dispatcher::(wrapper) Run and protect:<br>
createStoragePool(poolType=<u></u>None,<br>
spUUID=&#39;806d2356-12cf-437c-<u></u>8917-dd13ee823e36&#39;, poolName=&#39;testing&#39;,<br>
masterDom=&#39;dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39;,<br>
domList=[&#39;dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39;], masterVersion=2,<br>
lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60,<br>
ioOpTimeoutSec=10, leaseRetries=3, options=None)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,196::misc::756::<u></u>SamplingMethod::(__call__) Trying to enter<br>
sampling method (storage.sdc.refreshStorage)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,197::misc::758::<u></u>SamplingMethod::(__call__) Got in to sampling<br>
method<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,197::misc::756::<u></u>SamplingMethod::(__call__) Trying to enter<br>
sampling method (storage.iscsi.rescan)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,198::misc::758::<u></u>SamplingMethod::(__call__) Got in to sampling<br>
method<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,198::iscsi::407::<u></u>Storage.ISCSI::(rescan) Performing SCSI scan,<br>
this will take up to 30 seconds<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,199::iscsiadm::92::<u></u>Storage.Misc.excCmd::(_runCmd)<br>
&#39;/usr/bin/sudo -n /sbin/iscsiadm -m session -R&#39; (cwd None)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,228::misc::766::<u></u>SamplingMethod::(__call__) Returning last result<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,229::multipath::110::<u></u>Storage.Misc.excCmd::(rescan)<br>
&#39;/usr/bin/sudo -n /sbin/multipath -r&#39; (cwd None)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,294::multipath::110::<u></u>Storage.Misc.excCmd::(rescan) SUCCESS:<br>
&lt;err&gt; = &#39;&#39;; &lt;rc&gt; = 0<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,295::lvm::497::<u></u>OperationMutex::(_<u></u>invalidateAllPvs) Operation<br>
&#39;lvm invalidate operation&#39; got the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,295::lvm::499::<u></u>OperationMutex::(_<u></u>invalidateAllPvs) Operation<br>
&#39;lvm invalidate operation&#39; released the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,296::lvm::508::<u></u>OperationMutex::(_<u></u>invalidateAllVgs) Operation<br>
&#39;lvm invalidate operation&#39; got the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,296::lvm::510::<u></u>OperationMutex::(_<u></u>invalidateAllVgs) Operation<br>
&#39;lvm invalidate operation&#39; released the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,297::lvm::528::<u></u>OperationMutex::(_<u></u>invalidateAllLvs) Operation<br>
&#39;lvm invalidate operation&#39; got the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,297::lvm::530::<u></u>OperationMutex::(_<u></u>invalidateAllLvs) Operation<br>
&#39;lvm invalidate operation&#39; released the operation mutex<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,298::misc::766::<u></u>SamplingMethod::(__call__) Returning last result<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,318::fileSD::150::<u></u>Storage.StorageDomain::(__<u></u>init__) Reading<br>
domain in path<br>
/rhev/data-center/mnt/<u></u>glusterSD/192.168.10.120:_<u></u>vmimage/dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,322::persistentDict::<u></u>192::Storage.PersistentDict::(<u></u>__init__)<br>
Created a persistent dict with FileMetadataRW backend<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,328::persistentDict::<u></u>234::Storage.PersistentDict::(<u></u>refresh)<br>
read lines (FileMetadataRW)=[&#39;CLASS=Data&#39;<u></u>, &#39;DESCRIPTION=vmimage&#39;,<br>
&#39;IOOPTIMEOUTSEC=10&#39;, &#39;LEASERETRIES=3&#39;, &#39;LEASETIMESEC=60&#39;, &#39;LOCKPOLICY=&#39;,<br>
&#39;LOCKRENEWALINTERVALSEC=5&#39;, &#39;POOL_UUID=&#39;,<br>
&#39;REMOTE_PATH=192.168.10.120:/<u></u>vmimage&#39;, &#39;ROLE=Regular&#39;,<br>
&#39;SDUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39;, &#39;TYPE=GLUSTERFS&#39;,<br>
&#39;VERSION=3&#39;, &#39;_SHA_CKSUM=<u></u>9fdc035c398d2cd8b5c31bf5eea288<u></u>2c8782ed57&#39;]<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,334::fileSD::609::<u></u>Storage.StorageDomain::(<u></u>imageGarbageCollector) Removing<br>
remnants of deleted images []<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,335::sd::383::<u></u>Storage.StorageDomain::(_<u></u>registerResourceNamespaces)<br>
Resource namespace dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b_imageNS already<br>
registered<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,335::sd::391::<u></u>Storage.StorageDomain::(_<u></u>registerResourceNamespaces)<br>
Resource namespace dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b_volumeNS already<br>
registered<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,336::fileSD::350::<u></u>Storage.StorageDomain::(<u></u>validate)<br>
sdUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,340::persistentDict::<u></u>234::Storage.PersistentDict::(<u></u>refresh)<br>
read lines (FileMetadataRW)=[&#39;CLASS=Data&#39;<u></u>, &#39;DESCRIPTION=vmimage&#39;,<br>
&#39;IOOPTIMEOUTSEC=10&#39;, &#39;LEASERETRIES=3&#39;, &#39;LEASETIMESEC=60&#39;, &#39;LOCKPOLICY=&#39;,<br>
&#39;LOCKRENEWALINTERVALSEC=5&#39;, &#39;POOL_UUID=&#39;,<br>
&#39;REMOTE_PATH=192.168.10.120:/<u></u>vmimage&#39;, &#39;ROLE=Regular&#39;,<br>
&#39;SDUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39;, &#39;TYPE=GLUSTERFS&#39;,<br>
&#39;VERSION=3&#39;, &#39;_SHA_CKSUM=<u></u>9fdc035c398d2cd8b5c31bf5eea288<u></u>2c8782ed57&#39;]<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,341::resourceManager:<u></u>:198::ResourceManager.Request:<u></u>:(__init__)<br>
ResName=`Storage.806d2356-<u></u>12cf-437c-8917-dd13ee823e36`<u></u>ReqID=`de2ede47-22fa-43b8-<u></u>9f3b-dc714a45b450`::Request<br>
was made in &#39;/usr/share/vdsm/storage/hsm.<u></u>py&#39; line &#39;980&#39; at<br>
&#39;createStoragePool&#39;<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,342::resourceManager:<u></u>:542::ResourceManager::(<u></u>registerResource)<br>
Trying to register resource<br>
&#39;Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36&#39; for lock type &#39;exclusive&#39;<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,342::resourceManager:<u></u>:601::ResourceManager::(<u></u>registerResource)<br>
Resource &#39;Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36&#39; is free. Now<br>
locking as &#39;exclusive&#39; (1 active user)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,343::resourceManager:<u></u>:238::ResourceManager.Request:<u></u>:(grant)<br>
ResName=`Storage.806d2356-<u></u>12cf-437c-8917-dd13ee823e36`<u></u>ReqID=`de2ede47-22fa-43b8-<u></u>9f3b-dc714a45b450`::Granted<br>
request<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,343::task::827::<u></u>TaskManager.Task::(<u></u>resourceAcquired)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::_<u></u>resourcesAcquired:<br>
Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36 (exclusive)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,344::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::ref 1 aborting False<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,345::resourceManager:<u></u>:198::ResourceManager.Request:<u></u>:(__init__)<br>
ResName=`Storage.dc661957-<u></u>c0c1-44ba-a5b9-e6558904207b`<u></u>ReqID=`71bf6917-b501-4016-<u></u>ad8e-8b84849da8cb`::Request<br>
was made in &#39;/usr/share/vdsm/storage/hsm.<u></u>py&#39; line &#39;982&#39; at<br>
&#39;createStoragePool&#39;<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,345::resourceManager:<u></u>:542::ResourceManager::(<u></u>registerResource)<br>
Trying to register resource<br>
&#39;Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39; for lock type &#39;exclusive&#39;<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,346::resourceManager:<u></u>:601::ResourceManager::(<u></u>registerResource)<br>
Resource &#39;Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39; is free. Now<br>
locking as &#39;exclusive&#39; (1 active user)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,346::resourceManager:<u></u>:238::ResourceManager.Request:<u></u>:(grant)<br>
ResName=`Storage.dc661957-<u></u>c0c1-44ba-a5b9-e6558904207b`<u></u>ReqID=`71bf6917-b501-4016-<u></u>ad8e-8b84849da8cb`::Granted<br>
request<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,347::task::827::<u></u>TaskManager.Task::(<u></u>resourceAcquired)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::_<u></u>resourcesAcquired:<br>
Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b (exclusive)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,347::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::ref 1 aborting False<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,347::sp::133::<u></u>Storage.StoragePool::(<u></u>setBackend) updating pool<br>
806d2356-12cf-437c-8917-<u></u>dd13ee823e36 backend from type NoneType instance<br>
0x39e278bf00 to type StoragePoolDiskBackend instance 0x7f764c093cb0<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,348::sp::548::<u></u>Storage.StoragePool::(create)<br>
spUUID=806d2356-12cf-437c-<u></u>8917-dd13ee823e36 poolName=testing<br>
master_sd=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b<br>
domList=[&#39;dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39;] masterVersion=2<br>
{&#39;LEASETIMESEC&#39;: 60, &#39;IOOPTIMEOUTSEC&#39;: 10, &#39;LEASERETRIES&#39;: 3,<br>
&#39;LOCKRENEWALINTERVALSEC&#39;: 5}<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,348::fileSD::350::<u></u>Storage.StorageDomain::(<u></u>validate)<br>
sdUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,352::persistentDict::<u></u>234::Storage.PersistentDict::(<u></u>refresh)<br>
read lines (FileMetadataRW)=[&#39;CLASS=Data&#39;<u></u>, &#39;DESCRIPTION=vmimage&#39;,<br>
&#39;IOOPTIMEOUTSEC=10&#39;, &#39;LEASERETRIES=3&#39;, &#39;LEASETIMESEC=60&#39;, &#39;LOCKPOLICY=&#39;,<br>
&#39;LOCKRENEWALINTERVALSEC=5&#39;, &#39;POOL_UUID=&#39;,<br>
&#39;REMOTE_PATH=192.168.10.120:/<u></u>vmimage&#39;, &#39;ROLE=Regular&#39;,<br>
&#39;SDUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39;, &#39;TYPE=GLUSTERFS&#39;,<br>
&#39;VERSION=3&#39;, &#39;_SHA_CKSUM=<u></u>9fdc035c398d2cd8b5c31bf5eea288<u></u>2c8782ed57&#39;]<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,357::persistentDict::<u></u>234::Storage.PersistentDict::(<u></u>refresh)<br>
read lines (FileMetadataRW)=[&#39;CLASS=Data&#39;<u></u>, &#39;DESCRIPTION=vmimage&#39;,<br>
&#39;IOOPTIMEOUTSEC=10&#39;, &#39;LEASERETRIES=3&#39;, &#39;LEASETIMESEC=60&#39;, &#39;LOCKPOLICY=&#39;,<br>
&#39;LOCKRENEWALINTERVALSEC=5&#39;, &#39;POOL_UUID=&#39;,<br>
&#39;REMOTE_PATH=192.168.10.120:/<u></u>vmimage&#39;, &#39;ROLE=Regular&#39;,<br>
&#39;SDUUID=dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39;, &#39;TYPE=GLUSTERFS&#39;,<br>
&#39;VERSION=3&#39;, &#39;_SHA_CKSUM=<u></u>9fdc035c398d2cd8b5c31bf5eea288<u></u>2c8782ed57&#39;]<br>
Thread-13::WARNING::2014-06-21<br>
16:17:14,358::fileUtils::167::<u></u>Storage.fileUtils::(createdir) Dir<br>
/rhev/data-center/806d2356-<u></u>12cf-437c-8917-dd13ee823e36 already exists<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,358::persistentDict::<u></u>167::Storage.PersistentDict::(<u></u>transaction)<br>
Starting transaction<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:14,359::persistentDict::<u></u>175::Storage.PersistentDict::(<u></u>transaction)<br>
Finished transaction<br>
Thread-13::INFO::2014-06-21<br>
16:17:14,359::clusterlock::<u></u>184::SANLock::(acquireHostId) Acquiring host<br>
id for domain dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b (id: 250)<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,394::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`c4430b80-31d9-4a1d-bee8-<u></u>fae01a438da6`::moving from state init -&gt;<br>
state preparing<br>
Thread-24::INFO::2014-06-21<br>
16:17:14,395::logUtils::44::<u></u>dispatcher::(wrapper) Run and protect:<br>
repoStats(options=None)<br>
Thread-24::INFO::2014-06-21<br>
16:17:14,395::logUtils::47::<u></u>dispatcher::(wrapper) Run and protect:<br>
repoStats, Return response: {}<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,396::task::1185::<u></u>TaskManager.Task::(prepare)<br>
Task=`c4430b80-31d9-4a1d-bee8-<u></u>fae01a438da6`::finished: {}<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,396::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`c4430b80-31d9-4a1d-bee8-<u></u>fae01a438da6`::moving from state preparing<br>
-&gt; state finished<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,396::resourceManager:<u></u>:940::ResourceManager.Owner::(<u></u>releaseAll)<br>
Owner.releaseAll requests {} resources {}<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,396::resourceManager:<u></u>:977::ResourceManager.Owner::(<u></u>cancelAll)<br>
Owner.cancelAll requests {}<br>
Thread-24::DEBUG::2014-06-21<br>
16:17:14,397::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`c4430b80-31d9-4a1d-bee8-<u></u>fae01a438da6`::ref 0 aborting False<br>
Thread-13::ERROR::2014-06-21<br>
16:17:15,361::task::866::<u></u>TaskManager.Task::(_setError)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::Unexpected error<br>
Traceback (most recent call last):<br>
   File &quot;/usr/share/vdsm/storage/task.<u></u>py&quot;, line 873, in _run<br>
     return fn(*args, **kargs)<br>
   File &quot;/usr/share/vdsm/logUtils.py&quot;, line 45, in wrapper<br>
     res = f(*args, **kwargs)<br>
   File &quot;/usr/share/vdsm/storage/hsm.<u></u>py&quot;, line 988, in createStoragePool<br>
     leaseParams)<br>
   File &quot;/usr/share/vdsm/storage/sp.<u></u>py&quot;, line 573, in create<br>
     self._<u></u>acquireTemporaryClusterLock(<u></u>msdUUID, leaseParams)<br>
   File &quot;/usr/share/vdsm/storage/sp.<u></u>py&quot;, line 515, in<br>
_acquireTemporaryClusterLock<br>
     msd.acquireHostId(self.id)<br>
   File &quot;/usr/share/vdsm/storage/sd.<u></u>py&quot;, line 467, in acquireHostId<br>
     self._clusterLock.<u></u>acquireHostId(hostId, async)<br>
   File &quot;/usr/share/vdsm/storage/<u></u>clusterlock.py&quot;, line 199, in acquireHostId<br>
     raise se.AcquireHostIdFailure(self._<u></u>sdUUID, e)<br>
AcquireHostIdFailure: Cannot acquire host id:<br>
(&#39;dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b&#39;, SanlockException(90, &#39;Sanlock<br>
lockspace add failure&#39;, &#39;Message too long&#39;))<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,363::task::885::<u></u>TaskManager.Task::(_run)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::Task._run:<br>
d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9 (None,<br>
&#39;806d2356-12cf-437c-8917-<u></u>dd13ee823e36&#39;, &#39;testing&#39;,<br>
&#39;dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b&#39;,<br>
[&#39;dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b&#39;], 2, None, 5, 60, 10, 3) {}<br>
failed - stopping task<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,364::task::1211::<u></u>TaskManager.Task::(stop)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::stopping in state preparing<br>
(force False)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,364::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::ref 1 aborting True<br>
Thread-13::INFO::2014-06-21<br>
16:17:15,365::task::1168::<u></u>TaskManager.Task::(prepare)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::aborting: Task is aborted:<br>
&#39;Cannot acquire host id&#39; - code 661<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,365::task::1173::<u></u>TaskManager.Task::(prepare)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::Prepare: aborted: Cannot<br>
acquire host id<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,365::task::990::<u></u>TaskManager.Task::(_decref)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::ref 0 aborting True<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,366::task::925::<u></u>TaskManager.Task::(_doAbort)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::Task._doAbort: force False<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,366::resourceManager:<u></u>:977::ResourceManager.Owner::(<u></u>cancelAll)<br>
Owner.cancelAll requests {}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,366::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::moving from state preparing<br>
-&gt; state aborting<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,366::task::550::<u></u>TaskManager.Task::(__state_<u></u>aborting)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::_aborting: recover policy none<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,367::task::595::<u></u>TaskManager.Task::(_<u></u>updateState)<br>
Task=`d815e5e5-0202-4137-94be-<u></u>21dc5e2b61c9`::moving from state aborting<br>
-&gt; state failed<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,367::resourceManager:<u></u>:940::ResourceManager.Owner::(<u></u>releaseAll)<br>
Owner.releaseAll requests {} resources<br>
{&#39;Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39;: &lt; ResourceRef<br>
&#39;Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39;, isValid: &#39;True&#39; obj:<br>
&#39;None&#39;&gt;, &#39;Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36&#39;: &lt; ResourceRef<br>
&#39;Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36&#39;, isValid: &#39;True&#39; obj:<br>
&#39;None&#39;&gt;}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,367::resourceManager:<u></u>:977::ResourceManager.Owner::(<u></u>cancelAll)<br>
Owner.cancelAll requests {}<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,368::resourceManager:<u></u>:616::ResourceManager::(<u></u>releaseResource)<br>
Trying to release resource &#39;Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39;<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,369::resourceManager:<u></u>:635::ResourceManager::(<u></u>releaseResource)<br>
Released resource &#39;Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39; (0<br>
active users)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,369::resourceManager:<u></u>:641::ResourceManager::(<u></u>releaseResource)<br>
Resource &#39;Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39; is free, finding<br>
out if anyone is waiting for it.<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,369::resourceManager:<u></u>:649::ResourceManager::(<u></u>releaseResource)<br>
No one is waiting for resource<br>
&#39;Storage.dc661957-c0c1-44ba-<u></u>a5b9-e6558904207b&#39;, Clearing records.<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,370::resourceManager:<u></u>:616::ResourceManager::(<u></u>releaseResource)<br>
Trying to release resource &#39;Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36&#39;<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,370::resourceManager:<u></u>:635::ResourceManager::(<u></u>releaseResource)<br>
Released resource &#39;Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36&#39; (0<br>
active users)<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,370::resourceManager:<u></u>:641::ResourceManager::(<u></u>releaseResource)<br>
Resource &#39;Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36&#39; is free, finding<br>
out if anyone is waiting for it.<br>
Thread-13::DEBUG::2014-06-21<br>
16:17:15,371::resourceManager:<u></u>:649::ResourceManager::(<u></u>releaseResource)<br>
No one is waiting for resource<br>
&#39;Storage.806d2356-12cf-437c-<u></u>8917-dd13ee823e36&#39;, Clearing records.<br>
Thread-13::ERROR::2014-06-21<br>
16:17:15,371::dispatcher::65::<u></u>Storage.Dispatcher.Protect::(<u></u>run)<br>
{&#39;status&#39;: {&#39;message&#39;: &quot;Cannot acquire host id:<br>
(&#39;dc661957-c0c1-44ba-a5b9-<u></u>e6558904207b&#39;, SanlockException(90, &#39;Sanlock<br>
lockspace add failure&#39;, &#39;Message too long&#39;))&quot;, &#39;code&#39;: 661}}<br>
<br>
<br>
My oVirt version: 3.4.2-1.el6 (CentOS 6.5)<br>
The hypervisor hosts run GlusterFS 3.5.0-3.fc19 (Fedora 19).<br>
The two storage servers run GlusterFS 3.5.0-2.el6 (CentOS 6.5).<br>
<br>
So I am NOT using local storage of the hypervisor hosts for the<br>
GlusterFS bricks.<br>
<br>
What can I do to solve this error?<br>
<br>
</blockquote>
By the way, the options on the GlusterFS volume are as follows:<br>
<br>
Volume Name: vmimage<br>
Type: Replicate<br>
Volume ID: 348e1d45-1b80-420b-91c2-<u></u>93f0d764f227<br>
Status: Started<br>
Number of Bricks: 1 x 2 = 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 192.168.10.120:/export/<u></u>gluster01/brick<br>
Brick2: 192.168.10.149:/export/<u></u>gluster01/brick<br>
Options Reconfigured:<br>
network.ping-timeout: 10<br>
cluster.quorum-count: 1<br>
cluster.quorum-type: auto<br>
server.allow-insecure: on<br>
storage.owner-uid: 36<br>
storage.owner-gid: 36<br>
<br>
</blockquote>
OK, fixed it. For someone else&#39;s reference, I had to set the following<br>
options on the gluster volume:<br>
<br>
network.remote-dio: on<br>
performance.io-cache: off<br>
performance.read-ahead: off<br>
performance.quick-read: off<br>
cluster.eager-lock: enable<br>
<br>
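For reference, the equivalent Gluster CLI commands are sketched below (assuming the volume name vmimage from the volume info output above):<br>
<br>
```shell
# Apply the listed options to an existing volume named "vmimage"
gluster volume set vmimage network.remote-dio on
gluster volume set vmimage performance.io-cache off
gluster volume set vmimage performance.read-ahead off
gluster volume set vmimage performance.quick-read off
gluster volume set vmimage cluster.eager-lock enable

# Confirm the options show up under "Options Reconfigured"
gluster volume info vmimage
```
<br>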
Apparently that&#39;s done by the &#39;optimize for virt store&#39; checkbox, but<br>
obviously not when the volume is created manually. Having this in the<br>
documentation on <a href="http://ovirt.org" target="_blank">ovirt.org</a> would have saved me a lot of time and<br>
frustration.<br>
<br>
<br>
</blockquote>
<br></div></div>
it&#39;s a wiki, how about adding this for the next guy?<br>
<br>
thanks,<br>
   Itamar<div class="HOEnZb"><div class="h5"><br>
______________________________<u></u>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/<u></u>mailman/listinfo/users</a><br>
</div></div></blockquote></div><br></div>