<div dir="ltr">Thanks Michael!<div><br></div><div style>That is exactly what we needed:</div><div style><br></div><div style><div><font face="courier new, monospace">engine=&gt; select * from storage_pool_iso_map;</font></div>
<div><font face="courier new, monospace">              storage_id              |           storage_pool_id            | status | owner </font></div><div><font face="courier new, monospace">--------------------------------------+--------------------------------------+--------+-------</font></div>
<div><font face="courier new, monospace"> 774e3604-f449-4b3e-8c06-7cd16f98720c | 0f63de0e-7d98-48ce-99ec-add109f83c4f |      5 |     0</font></div><div><font face="courier new, monospace"> baa42b1c-ae2e-4486-88a1-e09e1f7a59cb | 0f63de0e-7d98-48ce-99ec-add109f83c4f |      0 |     0</font></div>
<div><font face="courier new, monospace"> 758c0abb-ea9a-43fb-bcd9-435f75cd0baa | 0f63de0e-7d98-48ce-99ec-add109f83c4f |      0 |     0</font></div><div><font face="courier new, monospace">(3 rows)</font></div><div><font face="courier new, monospace"><br>
</font></div><div><font face="courier new, monospace">engine=&gt; update storage_pool_iso_map set status=0 where storage_id=&#39;774e3604-f449-4b3e-8c06-7cd16f98720c&#39;;</font></div><div><font face="courier new, monospace">UPDATE 1</font></div>
<div><font face="courier new, monospace"><br></font></div><div><br></div><div><br></div><div style>Now my hosts are active, and I can boot my VMs. </div><div><br></div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">
On Thu, Apr 25, 2013 at 1:24 AM, Michael Kublin <mkublin@redhat.com> wrote:
<div class="HOEnZb"><div class="h5"><br>
<br>
<br>
<br>
----- Original Message -----
> From: "Yeela Kaplan" <ykaplan@redhat.com>
> To: "Tommy McNeely" <tommythekid@gmail.com>
> Cc: users@ovirt.org
> Sent: Thursday, April 25, 2013 10:08:56 AM
> Subject: Re: [Users] Master domain locked, error code 304
>
> Hi,
> Your problem is that the master domain is locked, so the engine does not
> send connectStorageServer to the vdsm host, and therefore the host does
> not see the master domain.
> You need to change the status of the master domain in the db from locked
> while the host is in maintenance.
> This can be tricky and is not recommended, because if you do it wrong you
> might corrupt the db.
> Another, safer way that I recommend is to try connectStorageServer to the
> masterSD from vdsClient on the vdsm host and see what happens; it might
> solve your problem.
>
> --
> Yeela
>
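For reference, a minimal sketch of the lookup Yeela describes, run in psql
against the engine DB. The join assumes the 3.2-era schema, where
storage_domain_static carries the domain id and name and storage_pool_iso_map
carries the per-pool status; verify the column names with \d before relying
on it:

  -- Find which domain in the pool is stuck (status 5 appears to be Locked
  -- here); storage_name is an assumed column name, check with \d first.
  select sds.id, sds.storage_name, spim.status
    from storage_domain_static sds
    join storage_pool_iso_map spim on spim.storage_id = sds.id;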
> ----- Original Message -----
> > From: "Tommy McNeely" <tommythekid@gmail.com>
> > To: "Juan Jose" <jj197005@gmail.com>
> > Cc: users@ovirt.org
> > Sent: Wednesday, April 24, 2013 7:30:20 PM
> > Subject: Re: [Users] Master domain locked, error code 304
> >
> > Hi Juan,
> >
> > That sounds like a possible path to follow. Our "master" domain does not
> > have any VMs in it. If no one else responds with an official path to
> > resolution, then I will try going into the database and hacking it like
> > that. I think it has something to do with the version or the metadata??
> >
> > [root@vmserver3 dom_md]# cat metadata
> > CLASS=Data
> > DESCRIPTION=SFOTestMaster1
> > IOOPTIMEOUTSEC=10
> > LEASERETRIES=3
> > LEASETIMESEC=60
> > LOCKPOLICY=
> > LOCKRENEWALINTERVALSEC=5
> > MASTER_VERSION=1
> > POOL_DESCRIPTION=SFODC01
> > POOL_DOMAINS=774e3604-f449-4b3e-8c06-7cd16f98720c:Active,758c0abb-ea9a-43fb-bcd9-435f75cd0baa:Active,baa42b1c-ae2e-4486-88a1-e09e1f7a59cb:Active
> > POOL_SPM_ID=1
> > POOL_SPM_LVER=4
> > POOL_UUID=0f63de0e-7d98-48ce-99ec-add109f83c4f
> > REMOTE_PATH=10.101.0.148:/c/vpt1-master
> > ROLE=Master
> > SDUUID=774e3604-f449-4b3e-8c06-7cd16f98720c
> > TYPE=NFS
> > VERSION=0
> > _SHA_CKSUM=fa8ef0e7cd5e50e107384a146e4bfc838d24ba08
> >
> >
> > On Wed, Apr 24, 2013 at 5:57 AM, Juan Jose <jj197005@gmail.com> wrote:
> >
> > Hello Tommy,
> >
> > I had a similar experience, and after trying to recover my storage
> > domain, I realized that my VMs were gone. You have to verify whether
> > your VM disks are still inside your storage domain. In my case, I had
> > to add a new storage domain as master domain to be able to remove the
> > old VMs from the DB and reattach the old storage domain. I hope this is
> > not your case. If you haven't lost your VMs, it's possible that you can
> > recover them.
> >
> > Good luck,
> >
> > Juanjo.
> >
> > On Wed, Apr 24, 2013 at 6:43 AM, Tommy McNeely <tommythekid@gmail.com>
> > wrote:
> >
> > We had a hard crash (network, then power) on our 2-node oVirt cluster.
> > We have an NFS datastore on CentOS 6 (3.2.0-1.39.el6). We can no longer
> > get the hosts to activate; they are unable to activate the "master"
> > domain. The master storage domain shows "Locked", while the other
> > storage domains show Unknown (disks) and Inactive (ISO). All the domains
> > are on the same NFS server; we are able to mount it, and the permissions
> > are good. We believe we might be getting bit by
> > https://bugzilla.redhat.com/show_bug.cgi?id=920694
> > or http://gerrit.ovirt.org/#/c/13709/ which says to cease working on it:
> >
> > Michael Kublin              Apr 10
> >
> > Patch Set 5: Do not submit
> >
> > Liron, please abandon this work. This interacts with host life cycle,
> > which will be changed; during that change the following problem will be
> > solved as well.
> >
> > So, we were wondering what we can do to get our oVirt back online, or
> > rather what the correct way is to solve this. We have a few VMs that are
> > down, which we are looking for ways to recover as quickly as possible.
> >
> > Thanks in advance,
> > Tommy
> >
> > Here are the ovirt-engine logs:
> >
> > 2013-04-23 21:30:04,041 ERROR
> > [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-49) Command
> > ConnectStoragePoolVDS execution failed. Exception:
> > IRSNoMasterDomainException: IRSGenericException: IRSErrorException:
> > IRSNoMasterDomainException: Cannot find master domain:
> > 'spUUID=0f63de0e-7d98-48ce-99ec-add109f83c4f,
> > msdUUID=774e3604-f449-4b3e-8c06-7cd16f98720c'
> > 2013-04-23 21:30:04,043 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
> > (pool-3-thread-49) FINISH, ConnectStoragePoolVDSCommand, log id: 50524b34
> > 2013-04-23 21:30:04,049 WARN
> > [org.ovirt.engine.core.bll.storage.ReconstructMasterDomainCommand]
> > (pool-3-thread-49) [7c5867d6] CanDoAction of action ReconstructMasterDomain
> > failed.
> > Reasons:VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status
> > Locked
> >
> >
Hi, a domain stuck in status Locked is a bug, and it is not directly related
to the discussed patch.
No action in vdsm can help in such a situation, so please do the following:

If domains are marked as Locked in the GUI, they should be unlocked in the DB.
My advice is to put the host in maintenance, and after that
run the following query: update storage_pool_iso_map set status = 0 where storage_id=...
(Info about domains is located inside the storage_domain_static table.)
Then activate a host; it should try to connect to all storages and to the
pool again, reconstruct will run, and I hope it will succeed.
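Since a stray UPDATE here can corrupt the engine DB, a more cautious version
of that query, wrapped in a transaction so it can be rolled back if the row
count looks wrong, might look like the sketch below. The status values (5 =
Locked, 0 = Unknown) are assumed from the psql session at the top of this
thread, and the UUID is this thread's master domain; substitute your own:

  begin;
  -- Inspect first: the master domain should be the only Locked (5) row.
  select storage_id, storage_pool_id, status from storage_pool_iso_map;
  -- Unlock the master domain (example UUID from this thread; use your own).
  update storage_pool_iso_map
     set status = 0
   where storage_id = '774e3604-f449-4b3e-8c06-7cd16f98720c';
  -- Expect "UPDATE 1"; if more rows were touched, issue rollback instead.
  commit;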
<div class="HOEnZb"><div class="h5"><br>
&gt; &gt;<br>
&gt; &gt; Here are the logs from vdsm:<br>
&gt; &gt;<br>
> > Thread-29::DEBUG::2013-04-23
> > 21:36:05,906::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> > /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3
> > 10.101.0.148:/c/vpt1-vmdisks1
> > /rhev/data-center/mnt/10.101.0.148:_c_vpt1-vmdisks1' (cwd None)
> > Thread-29::DEBUG::2013-04-23
> > 21:36:06,008::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> > /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3
> > 10.101.0.148:/c/vpool-iso /rhev/data-center/mnt/10.101.0.148:_c_vpool-iso'
> > (cwd None)
> > Thread-29::INFO::2013-04-23
> > 21:36:06,065::logUtils::44::dispatcher::(wrapper)
> > Run and protect: connectStorageServer, Return response: {'statuslist':
> > [{'status': 0, 'id': '7c19bd42-c3dc-41b9-b81b-d9b75214b8dc'}, {'status': 0,
> > 'id': 'eff2ef61-0b12-4429-b087-8742be17ae90'}]}
> > Thread-29::DEBUG::2013-04-23
> > 21:36:06,071::task::1151::TaskManager.Task::(prepare)
> > Task=`48337e40-2446-4357-b6dc-2c86f4da67e2`::finished: {'statuslist':
> > [{'status': 0, 'id': '7c19bd42-c3dc-41b9-b81b-d9b75214b8dc'}, {'status': 0,
> > 'id': 'eff2ef61-0b12-4429-b087-8742be17ae90'}]}
> > Thread-29::DEBUG::2013-04-23
> > 21:36:06,071::task::568::TaskManager.Task::(_updateState)
> > Task=`48337e40-2446-4357-b6dc-2c86f4da67e2`::moving from state preparing ->
> > state finished
> > Thread-29::DEBUG::2013-04-23
> > 21:36:06,071::resourceManager::830::ResourceManager.Owner::(releaseAll)
> > Owner.releaseAll requests {} resources {}
> > Thread-29::DEBUG::2013-04-23
> > 21:36:06,072::resourceManager::864::ResourceManager.Owner::(cancelAll)
> > Owner.cancelAll requests {}
> > Thread-29::DEBUG::2013-04-23
> > 21:36:06,072::task::957::TaskManager.Task::(_decref)
> > Task=`48337e40-2446-4357-b6dc-2c86f4da67e2`::ref 0 aborting False
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,112::BindingXMLRPC::161::vds::(wrapper)
> > [10.101.0.197]
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,112::task::568::TaskManager.Task::(_updateState)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::moving from state init ->
> > state preparing
> > Thread-30::INFO::2013-04-23
> > 21:36:06,113::logUtils::41::dispatcher::(wrapper)
> > Run and protect:
> > connectStoragePool(spUUID='0f63de0e-7d98-48ce-99ec-add109f83c4f', hostID=1,
> > scsiKey='0f63de0e-7d98-48ce-99ec-add109f83c4f',
> > msdUUID='774e3604-f449-4b3e-8c06-7cd16f98720c', masterVersion=73,
> > options=None)
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,113::resourceManager::190::ResourceManager.Request::(__init__)
> > ResName=`Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f`ReqID=`ee74329a-0a92-465a-be50-b8acc6d7246a`::Request
> > was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at
> > '__init__'
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,114::resourceManager::504::ResourceManager::(registerResource)
> > Trying to register resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f'
> > for lock type 'exclusive'
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,114::resourceManager::547::ResourceManager::(registerResource)
> > Resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f' is free. Now
> > locking as 'exclusive' (1 active user)
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,114::resourceManager::227::ResourceManager.Request::(grant)
> > ResName=`Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f`ReqID=`ee74329a-0a92-465a-be50-b8acc6d7246a`::Granted
> > request
> > Thread-30::INFO::2013-04-23
> > 21:36:06,115::sp::625::Storage.StoragePool::(connect) Connect host #1 to
> > the storage pool 0f63de0e-7d98-48ce-99ec-add109f83c4f with master domain:
> > 774e3604-f449-4b3e-8c06-7cd16f98720c (ver = 73)
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,116::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm
> > invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,116::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm
> > invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,117::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm
> > invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,117::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm
> > invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,117::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm
> > invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,118::lvm::510::OperationMutex::(_invalidateAllLvs) Operation 'lvm
> > invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,118::misc::1054::SamplingMethod::(__call__) Trying to enter
> > sampling method (storage.sdc.refreshStorage)
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,118::misc::1056::SamplingMethod::(__call__) Got in to sampling
> > method
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,119::misc::1054::SamplingMethod::(__call__) Trying to enter
> > sampling method (storage.iscsi.rescan)
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,119::misc::1056::SamplingMethod::(__call__) Got in to sampling
> > method
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,119::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> > /sbin/iscsiadm -m session -R' (cwd None)
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,136::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> =
> > 'iscsiadm: No session found.\n'; <rc> = 21
> > Thread-30::DEBUG::2013-04-23
> > 21:36:06,136::misc::1064::SamplingMethod::(__call__) Returning last result
> > MainProcess|Thread-30::DEBUG::2013-04-23
> > 21:36:06,139::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
> > of=/sys/class/scsi_host/host0/scan' (cwd None)
> > MainProcess|Thread-30::DEBUG::2013-04-23
> > 21:36:06,142::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
> > of=/sys/class/scsi_host/host1/scan' (cwd None)
> > MainProcess|Thread-30::DEBUG::2013-04-23
> > 21:36:06,146::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
> > of=/sys/class/scsi_host/host2/scan' (cwd None)
> > MainProcess|Thread-30::DEBUG::2013-04-23
> > 21:36:06,149::iscsi::402::Storage.ISCSI::(forceIScsiScan) Performing SCSI
> > scan, this will take up to 30 seconds
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,152::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> > /sbin/multipath' (cwd None)
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,254::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
> > ''; <rc> = 0
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,256::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm
> > invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,256::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm
> > invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,257::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm
> > invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,257::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm
> > invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,258::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm
> > invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,258::lvm::510::OperationMutex::(_invalidateAllLvs) Operation 'lvm
> > invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,258::misc::1064::SamplingMethod::(__call__) Returning last result
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,259::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload
> > operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,261::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> > /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]
> > ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> > filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1
> > wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
> > --noheadings --units b --nosuffix --separator | -o
> > uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
> > 774e3604-f449-4b3e-8c06-7cd16f98720c' (cwd None)
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,514::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = '
> > Volume group "774e3604-f449-4b3e-8c06-7cd16f98720c" not found\n'; <rc> = 5
> > Thread-30::WARNING::2013-04-23
> > 21:36:08,516::lvm::373::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
> > Volume group "774e3604-f449-4b3e-8c06-7cd16f98720c" not found']
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,518::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload
> > operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,524::resourceManager::557::ResourceManager::(releaseResource)
> > Trying to release resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f'
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,525::resourceManager::573::ResourceManager::(releaseResource)
> > Released resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f' (0 active
> > users)
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,525::resourceManager::578::ResourceManager::(releaseResource)
> > Resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f' is free, finding
> > out if anyone is waiting for it.
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,525::resourceManager::585::ResourceManager::(releaseResource) No
> > one is waiting for resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f',
> > Clearing records.
> > Thread-30::ERROR::2013-04-23
> > 21:36:08,526::task::833::TaskManager.Task::(_setError)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::Unexpected error
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/storage/task.py", line 840, in _run
> >     return fn(*args, **kargs)
> >   File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
> >     res = f(*args, **kwargs)
> >   File "/usr/share/vdsm/storage/hsm.py", line 926, in connectStoragePool
> >     masterVersion, options)
> >   File "/usr/share/vdsm/storage/hsm.py", line 973, in _connectStoragePool
> >     res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
> >   File "/usr/share/vdsm/storage/sp.py", line 642, in connect
> >     self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
> >   File "/usr/share/vdsm/storage/sp.py", line 1166, in __rebuild
> >     self.masterDomain = self.getMasterDomain(msdUUID=msdUUID,
> >     masterVersion=masterVersion)
> >   File "/usr/share/vdsm/storage/sp.py", line 1505, in getMasterDomain
> >     raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
> > StoragePoolMasterNotFound: Cannot find master domain:
> > 'spUUID=0f63de0e-7d98-48ce-99ec-add109f83c4f,
> > msdUUID=774e3604-f449-4b3e-8c06-7cd16f98720c'
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,527::task::852::TaskManager.Task::(_run)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::Task._run:
> > f551fa3f-9d8c-4de3-895a-964c821060d4
> > ('0f63de0e-7d98-48ce-99ec-add109f83c4f', 1,
> > '0f63de0e-7d98-48ce-99ec-add109f83c4f',
> > '774e3604-f449-4b3e-8c06-7cd16f98720c', 73) {} failed - stopping task
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,528::task::1177::TaskManager.Task::(stop)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::stopping in state preparing
> > (force False)
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,528::task::957::TaskManager.Task::(_decref)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::ref 1 aborting True
> > Thread-30::INFO::2013-04-23
> > 21:36:08,528::task::1134::TaskManager.Task::(prepare)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::aborting: Task is aborted:
> > 'Cannot find master domain' - code 304
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,529::task::1139::TaskManager.Task::(prepare)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::Prepare: aborted: Cannot find
> > master domain
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,529::task::957::TaskManager.Task::(_decref)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::ref 0 aborting True
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,529::task::892::TaskManager.Task::(_doAbort)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::Task._doAbort: force False
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,530::resourceManager::864::ResourceManager.Owner::(cancelAll)
> > Owner.cancelAll requests {}
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,530::task::568::TaskManager.Task::(_updateState)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::moving from state preparing ->
> > state aborting
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,530::task::523::TaskManager.Task::(__state_aborting)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::_aborting: recover policy none
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,531::task::568::TaskManager.Task::(_updateState)
> > Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::moving from state aborting ->
> > state failed
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,531::resourceManager::830::ResourceManager.Owner::(releaseAll)
> > Owner.releaseAll requests {} resources {}
> > Thread-30::DEBUG::2013-04-23
> > 21:36:08,531::resourceManager::864::ResourceManager.Owner::(cancelAll)
> > Owner.cancelAll requests {}
> > Thread-30::ERROR::2013-04-23
> > 21:36:08,532::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status':
> > {'message': "Cannot find master domain:
> > 'spUUID=0f63de0e-7d98-48ce-99ec-add109f83c4f,
> > msdUUID=774e3604-f449-4b3e-8c06-7cd16f98720c'", 'code': 304}}
> > [root@vmserver3 vdsm]#
> >
> >
> > _______________________________________________
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
> >
> >
> > _______________________________________________
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>