<div dir="ltr">Thanks Michael!<div><br></div><div style>That is exactly what we needed:</div><div style><br></div><div style><div><font face="courier new, monospace">engine=> select * from storage_pool_iso_map;</font></div>
<div><font face="courier new, monospace"> storage_id | storage_pool_id | status | owner </font></div><div><font face="courier new, monospace">--------------------------------------+--------------------------------------+--------+-------</font></div>
<div><font face="courier new, monospace"> 774e3604-f449-4b3e-8c06-7cd16f98720c | 0f63de0e-7d98-48ce-99ec-add109f83c4f | 5 | 0</font></div><div><font face="courier new, monospace"> baa42b1c-ae2e-4486-88a1-e09e1f7a59cb | 0f63de0e-7d98-48ce-99ec-add109f83c4f | 0 | 0</font></div>
<div><font face="courier new, monospace"> 758c0abb-ea9a-43fb-bcd9-435f75cd0baa | 0f63de0e-7d98-48ce-99ec-add109f83c4f | 0 | 0</font></div><div><font face="courier new, monospace">(3 rows)</font></div><div><font face="courier new, monospace"><br>
</font></div><div><font face="courier new, monospace">engine=> update storage_pool_iso_map set status=0 where storage_id='774e3604-f449-4b3e-8c06-7cd16f98720c';</font></div><div><font face="courier new, monospace">UPDATE 1</font></div>
<div><font face="courier new, monospace"><br></font></div><div><br></div><div><br></div><div style>Now my hosts are active, and I can boot my VMs. </div><div><br></div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">
On Thu, Apr 25, 2013 at 1:24 AM, Michael Kublin <mkublin@redhat.com> wrote:

----- Original Message -----
> From: "Yeela Kaplan" <ykaplan@redhat.com>
> To: "Tommy McNeely" <tommythekid@gmail.com>
> Cc: users@ovirt.org
> Sent: Thursday, April 25, 2013 10:08:56 AM
> Subject: Re: [Users] Master domain locked, error code 304
>
> Hi,
> Your problem is that the master domain is locked, so the engine does not send
> connectStorageServer to the vdsm host, and therefore the host does not see
> the master domain.
> You need to change the status of the master domain in the db from locked
> while the host is in maintenance.
> This can be tricky and is not recommended, because if you do it wrong you
> might corrupt the db.
> Another, safer, way that I recommend is to try connectStorageServer to the
> masterSD from vdsClient on the vdsm host and see what happens; it might
> solve your problem.
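>
> From the host, that call looks something like this (I am writing the NFS
> variant from memory -- check `vdsClient -s 0 -h` for the exact syntax on
> your version; the type code 1 is NFS, and the connection id is arbitrary):
>
> vdsClient -s 0 connectStorageServer 1 0f63de0e-7d98-48ce-99ec-add109f83c4f 'id=00000000-0000-0000-0000-000000000000,connection=10.101.0.148:/c/vpt1-master,portal=,port=,iqn=,user=,password='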
>
> --
> Yeela
>
> ----- Original Message -----
> > From: "Tommy McNeely" <tommythekid@gmail.com>
> > To: "Juan Jose" <jj197005@gmail.com>
> > Cc: users@ovirt.org
> > Sent: Wednesday, April 24, 2013 7:30:20 PM
> > Subject: Re: [Users] Master domain locked, error code 304
> >
> > Hi Juan,
> >
> > That sounds like a possible path to follow. Our "master" domain does not
> > have any VMs in it. If no one else responds with an official path to
> > resolution, then I will try going into the database and hacking it like
> > that. I think it has something to do with the version or the metadata??
> >
> > [root@vmserver3 dom_md]# cat metadata
> > CLASS=Data
> > DESCRIPTION=SFOTestMaster1
> > IOOPTIMEOUTSEC=10
> > LEASERETRIES=3
> > LEASETIMESEC=60
> > LOCKPOLICY=
> > LOCKRENEWALINTERVALSEC=5
> > MASTER_VERSION=1
> > POOL_DESCRIPTION=SFODC01
> > POOL_DOMAINS=774e3604-f449-4b3e-8c06-7cd16f98720c:Active,758c0abb-ea9a-43fb-bcd9-435f75cd0baa:Active,baa42b1c-ae2e-4486-88a1-e09e1f7a59cb:Active
> > POOL_SPM_ID=1
> > POOL_SPM_LVER=4
> > POOL_UUID=0f63de0e-7d98-48ce-99ec-add109f83c4f
> > REMOTE_PATH=10.101.0.148:/c/vpt1-master
> > ROLE=Master
> > SDUUID=774e3604-f449-4b3e-8c06-7cd16f98720c
> > TYPE=NFS
> > VERSION=0
> > _SHA_CKSUM=fa8ef0e7cd5e50e107384a146e4bfc838d24ba08
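> >
> > (If hand-editing that file turns out to be the answer, I assume the
> > _SHA_CKSUM line has to be recomputed too. My guess -- unverified, check
> > vdsm's persistentDict.py before trusting it -- is that it is the SHA-1
> > of the remaining lines with newlines stripped:)
> >
> > grep -v '^_SHA_CKSUM' metadata | tr -d '\n' | sha1sum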
> >
> > On Wed, Apr 24, 2013 at 5:57 AM, Juan Jose <jj197005@gmail.com> wrote:
> >
> > Hello Tommy,
> >
> > I had a similar experience: after trying to recover my storage domain, I
> > realized that my VMs had gone missing. You have to verify that your VM
> > disks are still inside your storage domain. In my case, I had to add a new
> > Storage domain as Master domain to be able to remove the old VMs from the
> > DB and reattach the old storage domain. I hope this is not your case. If
> > you haven't lost your VMs, it's possible that you can recover them.
> >
> > Good luck,
> >
> > Juanjo.
> >
> > On Wed, Apr 24, 2013 at 6:43 AM, Tommy McNeely <tommythekid@gmail.com> wrote:
> >
> > We had a hard crash (network, then power) on our 2-node oVirt cluster. We
> > have an NFS datastore on CentOS 6 (3.2.0-1.39.el6). We can no longer get
> > the hosts to activate: they are unable to activate the "master" domain.
> > The master storage domain shows "Locked" while the other storage domains
> > show Unknown (disks) and Inactive (ISO). All the domains are on the same
> > NFS server, we are able to mount it, and the permissions are good. We
> > believe we might be getting bit by
> > https://bugzilla.redhat.com/show_bug.cgi?id=920694 or
> > http://gerrit.ovirt.org/#/c/13709/ which says to cease working on it:
> >
> > Michael Kublin Apr 10
> >
> > Patch Set 5: Do not submit
> >
> > Liron, please abandon this work. This interacts with host life cycle which
> > will be changed; during that change the following problem will be solved
> > as well.
> >
> > So, we were wondering what we can do to get our oVirt back online, or
> > rather what the correct way is to solve this. We have a few VMs down that
> > we are looking to recover as quickly as possible.
> >
> > Thanks in advance,
> > Tommy
> >
> > Here are the ovirt-engine logs:
> >
> > 2013-04-23 21:30:04,041 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-49) Command ConnectStoragePoolVDS execution failed. Exception: IRSNoMasterDomainException: IRSGenericException: IRSErrorException: IRSNoMasterDomainException: Cannot find master domain: 'spUUID=0f63de0e-7d98-48ce-99ec-add109f83c4f, msdUUID=774e3604-f449-4b3e-8c06-7cd16f98720c'
> > 2013-04-23 21:30:04,043 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (pool-3-thread-49) FINISH, ConnectStoragePoolVDSCommand, log id: 50524b34
> > 2013-04-23 21:30:04,049 WARN [org.ovirt.engine.core.bll.storage.ReconstructMasterDomainCommand] (pool-3-thread-49) [7c5867d6] CanDoAction of action ReconstructMasterDomain failed. Reasons:VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status Locked
> >
Hi, a domain stuck in status Locked is a bug, and it is not directly related to the discussed patch.
No action in vdsm can help in such a situation; please do the following:

If domains are marked as Locked in the GUI, they should be unlocked in the DB.
My advice is to put the host into maintenance, and after that run the following
query: update storage_pool_iso_map set status = 0 where storage_id=...
(Info about the domains is located in the storage_domain_static table.)
Activate the host; after that, the host should try to connect to all storage
domains and to the pool again, reconstruct will run, and I hope it will succeed.
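
On this setup that would look something like the following (the UUID is the
master domain ID seen in this thread; in general, look it up first, for example:)

engine=> select id, storage_name from storage_domain_static;
engine=> update storage_pool_iso_map set status = 0 where storage_id = '774e3604-f449-4b3e-8c06-7cd16f98720c';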
<div class="HOEnZb"><div class="h5"><br>
> >
> > Here are the logs from vdsm:
> >
> > Thread-29::DEBUG::2013-04-23 21:36:05,906::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 10.101.0.148:/c/vpt1-vmdisks1 /rhev/data-center/mnt/10.101.0.148:_c_vpt1-vmdisks1' (cwd None)
> > Thread-29::DEBUG::2013-04-23 21:36:06,008::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 10.101.0.148:/c/vpool-iso /rhev/data-center/mnt/10.101.0.148:_c_vpool-iso' (cwd None)
> > Thread-29::INFO::2013-04-23 21:36:06,065::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '7c19bd42-c3dc-41b9-b81b-d9b75214b8dc'}, {'status': 0, 'id': 'eff2ef61-0b12-4429-b087-8742be17ae90'}]}
> > Thread-29::DEBUG::2013-04-23 21:36:06,071::task::1151::TaskManager.Task::(prepare) Task=`48337e40-2446-4357-b6dc-2c86f4da67e2`::finished: {'statuslist': [{'status': 0, 'id': '7c19bd42-c3dc-41b9-b81b-d9b75214b8dc'}, {'status': 0, 'id': 'eff2ef61-0b12-4429-b087-8742be17ae90'}]}
> > Thread-29::DEBUG::2013-04-23 21:36:06,071::task::568::TaskManager.Task::(_updateState) Task=`48337e40-2446-4357-b6dc-2c86f4da67e2`::moving from state preparing -> state finished
> > Thread-29::DEBUG::2013-04-23 21:36:06,071::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> > Thread-29::DEBUG::2013-04-23 21:36:06,072::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> > Thread-29::DEBUG::2013-04-23 21:36:06,072::task::957::TaskManager.Task::(_decref) Task=`48337e40-2446-4357-b6dc-2c86f4da67e2`::ref 0 aborting False
> > Thread-30::DEBUG::2013-04-23 21:36:06,112::BindingXMLRPC::161::vds::(wrapper) [10.101.0.197]
> > Thread-30::DEBUG::2013-04-23 21:36:06,112::task::568::TaskManager.Task::(_updateState) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::moving from state init -> state preparing
> > Thread-30::INFO::2013-04-23 21:36:06,113::logUtils::41::dispatcher::(wrapper) Run and protect: connectStoragePool(spUUID='0f63de0e-7d98-48ce-99ec-add109f83c4f', hostID=1, scsiKey='0f63de0e-7d98-48ce-99ec-add109f83c4f', msdUUID='774e3604-f449-4b3e-8c06-7cd16f98720c', masterVersion=73, options=None)
> > Thread-30::DEBUG::2013-04-23 21:36:06,113::resourceManager::190::ResourceManager.Request::(__init__) ResName=`Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f`ReqID=`ee74329a-0a92-465a-be50-b8acc6d7246a`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at '__init__'
> > Thread-30::DEBUG::2013-04-23 21:36:06,114::resourceManager::504::ResourceManager::(registerResource) Trying to register resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f' for lock type 'exclusive'
> > Thread-30::DEBUG::2013-04-23 21:36:06,114::resourceManager::547::ResourceManager::(registerResource) Resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f' is free. Now locking as 'exclusive' (1 active user)
> > Thread-30::DEBUG::2013-04-23 21:36:06,114::resourceManager::227::ResourceManager.Request::(grant) ResName=`Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f`ReqID=`ee74329a-0a92-465a-be50-b8acc6d7246a`::Granted request
> > Thread-30::INFO::2013-04-23 21:36:06,115::sp::625::Storage.StoragePool::(connect) Connect host #1 to the storage pool 0f63de0e-7d98-48ce-99ec-add109f83c4f with master domain: 774e3604-f449-4b3e-8c06-7cd16f98720c (ver = 73)
> > Thread-30::DEBUG::2013-04-23 21:36:06,116::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:06,116::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:06,117::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:06,117::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:06,117::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:06,118::lvm::510::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:06,118::misc::1054::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
> > Thread-30::DEBUG::2013-04-23 21:36:06,118::misc::1056::SamplingMethod::(__call__) Got in to sampling method
> > Thread-30::DEBUG::2013-04-23 21:36:06,119::misc::1054::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
> > Thread-30::DEBUG::2013-04-23 21:36:06,119::misc::1056::SamplingMethod::(__call__) Got in to sampling method
> > Thread-30::DEBUG::2013-04-23 21:36:06,119::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
> > Thread-30::DEBUG::2013-04-23 21:36:06,136::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
> > Thread-30::DEBUG::2013-04-23 21:36:06,136::misc::1064::SamplingMethod::(__call__) Returning last result
> > MainProcess|Thread-30::DEBUG::2013-04-23 21:36:06,139::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host0/scan' (cwd None)
> > MainProcess|Thread-30::DEBUG::2013-04-23 21:36:06,142::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host1/scan' (cwd None)
> > MainProcess|Thread-30::DEBUG::2013-04-23 21:36:06,146::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd of=/sys/class/scsi_host/host2/scan' (cwd None)
> > MainProcess|Thread-30::DEBUG::2013-04-23 21:36:06,149::iscsi::402::Storage.ISCSI::(forceIScsiScan) Performing SCSI scan, this will take up to 30 seconds
> > Thread-30::DEBUG::2013-04-23 21:36:08,152::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
> > Thread-30::DEBUG::2013-04-23 21:36:08,254::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
> > Thread-30::DEBUG::2013-04-23 21:36:08,256::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:08,256::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:08,257::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:08,257::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:08,258::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:08,258::lvm::510::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:08,258::misc::1064::SamplingMethod::(__call__) Returning last result
> > Thread-30::DEBUG::2013-04-23 21:36:08,259::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:08,261::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free 774e3604-f449-4b3e-8c06-7cd16f98720c' (cwd None)
> > Thread-30::DEBUG::2013-04-23 21:36:08,514::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = ' Volume group "774e3604-f449-4b3e-8c06-7cd16f98720c" not found\n'; <rc> = 5
> > Thread-30::WARNING::2013-04-23 21:36:08,516::lvm::373::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group "774e3604-f449-4b3e-8c06-7cd16f98720c" not found']
> > Thread-30::DEBUG::2013-04-23 21:36:08,518::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
> > Thread-30::DEBUG::2013-04-23 21:36:08,524::resourceManager::557::ResourceManager::(releaseResource) Trying to release resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f'
> > Thread-30::DEBUG::2013-04-23 21:36:08,525::resourceManager::573::ResourceManager::(releaseResource) Released resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f' (0 active users)
> > Thread-30::DEBUG::2013-04-23 21:36:08,525::resourceManager::578::ResourceManager::(releaseResource) Resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f' is free, finding out if anyone is waiting for it.
> > Thread-30::DEBUG::2013-04-23 21:36:08,525::resourceManager::585::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.0f63de0e-7d98-48ce-99ec-add109f83c4f', Clearing records.
> > Thread-30::ERROR::2013-04-23 21:36:08,526::task::833::TaskManager.Task::(_setError) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::Unexpected error
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/storage/task.py", line 840, in _run
> >     return fn(*args, **kargs)
> >   File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
> >     res = f(*args, **kwargs)
> >   File "/usr/share/vdsm/storage/hsm.py", line 926, in connectStoragePool
> >     masterVersion, options)
> >   File "/usr/share/vdsm/storage/hsm.py", line 973, in _connectStoragePool
> >     res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
> >   File "/usr/share/vdsm/storage/sp.py", line 642, in connect
> >     self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
> >   File "/usr/share/vdsm/storage/sp.py", line 1166, in __rebuild
> >     self.masterDomain = self.getMasterDomain(msdUUID=msdUUID, masterVersion=masterVersion)
> >   File "/usr/share/vdsm/storage/sp.py", line 1505, in getMasterDomain
> >     raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
> > StoragePoolMasterNotFound: Cannot find master domain: 'spUUID=0f63de0e-7d98-48ce-99ec-add109f83c4f, msdUUID=774e3604-f449-4b3e-8c06-7cd16f98720c'
> > Thread-30::DEBUG::2013-04-23 21:36:08,527::task::852::TaskManager.Task::(_run) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::Task._run: f551fa3f-9d8c-4de3-895a-964c821060d4 ('0f63de0e-7d98-48ce-99ec-add109f83c4f', 1, '0f63de0e-7d98-48ce-99ec-add109f83c4f', '774e3604-f449-4b3e-8c06-7cd16f98720c', 73) {} failed - stopping task
> > Thread-30::DEBUG::2013-04-23 21:36:08,528::task::1177::TaskManager.Task::(stop) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::stopping in state preparing (force False)
> > Thread-30::DEBUG::2013-04-23 21:36:08,528::task::957::TaskManager.Task::(_decref) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::ref 1 aborting True
> > Thread-30::INFO::2013-04-23 21:36:08,528::task::1134::TaskManager.Task::(prepare) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::aborting: Task is aborted: 'Cannot find master domain' - code 304
> > Thread-30::DEBUG::2013-04-23 21:36:08,529::task::1139::TaskManager.Task::(prepare) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::Prepare: aborted: Cannot find master domain
> > Thread-30::DEBUG::2013-04-23 21:36:08,529::task::957::TaskManager.Task::(_decref) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::ref 0 aborting True
> > Thread-30::DEBUG::2013-04-23 21:36:08,529::task::892::TaskManager.Task::(_doAbort) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::Task._doAbort: force False
> > Thread-30::DEBUG::2013-04-23 21:36:08,530::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> > Thread-30::DEBUG::2013-04-23 21:36:08,530::task::568::TaskManager.Task::(_updateState) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::moving from state preparing -> state aborting
> > Thread-30::DEBUG::2013-04-23 21:36:08,530::task::523::TaskManager.Task::(__state_aborting) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::_aborting: recover policy none
> > Thread-30::DEBUG::2013-04-23 21:36:08,531::task::568::TaskManager.Task::(_updateState) Task=`f551fa3f-9d8c-4de3-895a-964c821060d4`::moving from state aborting -> state failed
> > Thread-30::DEBUG::2013-04-23 21:36:08,531::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> > Thread-30::DEBUG::2013-04-23 21:36:08,531::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> > Thread-30::ERROR::2013-04-23 21:36:08,532::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': "Cannot find master domain: 'spUUID=0f63de0e-7d98-48ce-99ec-add109f83c4f, msdUUID=774e3604-f449-4b3e-8c06-7cd16f98720c'", 'code': 304}}
> > [root@vmserver3 vdsm]#
> >
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users