<div dir="ltr">John,<div><br></div><div>Thanks again for the reply.  Yes the API at the path you mentioned shows the domain.  This has to have been a bug as things began working after I changed values in the database.  Somehow setting the new IP for the storage connection in the database for both NFS and iSCSI resulted in the NFS domain becoming master again and at that point the iSCSI &quot;magically&quot; went active once NFS (master) was active.  I don&#39;t pretend to know how this happened and even my boss laughed when I shrugged to the question &quot;how did you fix it?&quot;.  I&#39;d be glad to supply the devs with whatever information I can, but I can&#39;t change much now as the goal of today was to get back online and that&#39;s been achieved.</div><div><br></div><div>One thing I may have done that could have been a cause of iSCSI not coming back was once I lost the IB fabric, in order to disconnect iSCSI that was over ISER, I issued the &quot;vgchange -an &lt;domain ID&gt;&quot; and then logged out of the iscsi session on each ovirt node.  One of my hosts would not re-activate once everything was back online and doing a &quot;vgchange -ay &lt;domain ID&gt;&quot; then removing the host from maintenance worked.  Since I had to switch from one network to another and from iSER to iSCSI, I wanted all active connections closed and the only way I could make the block devices disconnect cleanly was to disable the volume group on the LUN.</div><div><br></div><div>Thanks,</div><div>- Trey</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Oct 21, 2014 at 4:06 PM, Sandra Taylor <span dir="ltr">&lt;<a href="mailto:jtt77777@gmail.com" target="_blank">jtt77777@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Trey,<br>
The thread that keeps repeating is the call to repoStats. I believe<br>
it&#39;s part of the storage monitoring, and in my environment it repeats<br>
every 15 seconds.<br>
Mine looks like:<br>
Thread-168::INFO::2014-10-21<br>
15:02:42,616::logUtils::44::dispatcher::(wrapper) Run and protect:<br>
repoStats(options=None)<br>
Thread-168::INFO::2014-10-21<br>
15:02:42,617::logUtils::47::dispatcher::(wrapper) Run and protect:<br>
repoStats, Return response: {&#39;86f0a388-dc9d-4e44-a599-b3f2c9e58922&#39;:<br>
{&#39;code&#39;: 0, &#39;version&#39;: 3, &#39;acquired&#39;: True, &#39;delay&#39;: &#39;0.00066814&#39;,<br>
&#39;lastCheck&#39;: &#39;1.8&#39;, &#39;valid&#39;: True}}<br>
<br>
but yours isn&#39;t returning anything; that&#39;s the empty response: {}<br>
<br>
But I think the problem is that the HSM isn&#39;t finding any volume<br>
groups in its call to lvm vgs, and thus no storage domains (see the<br>
&quot;No volume groups found&quot; and &quot;Found SD uuids: ()&quot; lines below).<br>
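<br>
One quick check from the host (just a sketch; I&#39;m assuming the LUN is<br>
the 1IET_00010001 device from your multipath -ll output, and that the<br>
VG is named after the domain UUID, as is normal for block domains):<br>
<br>
# pvs /dev/mapper/1IET_00010001<br>
# vgs 4eeb8415-c912-44bf-b482-2673849705c9<br>
<br>
If pvs shows no PV there, LVM isn&#39;t seeing the metadata on the LUN at<br>
all, which would explain the empty result from vdsm&#39;s vgs call.<br>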
<span class=""><br>
Thread-14::DEBUG::2014-10-21<br>
15:12:56,768::lvm::296::Storage.Misc.excCmd::(cmd) &#39;/usr/bin/sudo -n<br>
/sbin/lvm vgs --config &quot; devices { preferred_names =<br>
[\\&quot;^/dev/mapper/\\&quot;] ignore_suspended_devices=1 write_cache_state=0<br>
disable_after_error_count=3)<br>
Thread-14::DEBUG::2014-10-21<br>
15:12:56,968::lvm::296::Storage.Misc.excCmd::(cmd) SUCCESS: &lt;err&gt; = &#39;<br>
No volume groups found\n&#39;; &lt;rc&gt; = 0<br>
Thread-14::DEBUG::2014-10-21<br>
15:12:56,969::lvm::415::OperationMutex::(_reloadvgs) Operation &#39;lvm<br>
reload operation&#39; released the operation mutex<br>
Thread-14::DEBUG::2014-10-21<br>
15:12:56,974::hsm::2352::Storage.HSM::(__prefetchDomains) Found SD<br>
uuids: ()<br>
Thread-14::DEBUG::2014-10-21<br>
15:12:56,974::hsm::2408::Storage.HSM::(connectStorageServer) knownSDs:<br>
{}<br>
<br>
</span>But I don&#39;t really know how that&#39;s possible, considering you show what<br>
looks to be a domain in the lvscan.<br>
The only thing that comes to mind is that there was a bug in some<br>
versions of the iscsi initiator tools where an error was returned if a<br>
session was already logged in, but that doesn&#39;t look to be the case<br>
from the logs. Or maybe something like lvmetad caching, but vdsm uses<br>
its own config to turn lvmetad off (at /var/run/vdsm/lvm, I think).<br>
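<br>
To rule lvmetad out, you could compare a scan with the daemon cache<br>
explicitly disabled (a sketch; use_lvmetad is a standard lvm.conf<br>
global and this is read-only):<br>
<br>
# vgs --config &#39;global { use_lvmetad = 0 }&#39;<br>
<br>
If that shows the domain VG while a plain vgs doesn&#39;t, a stale cache<br>
is the likely culprit.<br>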
<br>
Does the storage domain with that ID exist?<br>
It should be seen at /api/storagedomains/4eeb8415-c912-44bf-b482-2673849705c9<br>
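<br>
Something like this should show it if the engine still knows about the<br>
domain (substitute your admin user and engine address; -k just skips<br>
certificate verification):<br>
<br>
# curl -k -u admin@internal:PASSWORD https://your-engine-fqdn/api/storagedomains/4eeb8415-c912-44bf-b482-2673849705c9<br>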
<br>
-John<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
<br>
On Tue, Oct 21, 2014 at 4:17 PM, Trey Dockendorf &lt;<a href="mailto:treydock@gmail.com">treydock@gmail.com</a>&gt; wrote:<br>
&gt; John,<br>
&gt;<br>
&gt; Thanks for the reply.  The Discover function in the GUI works... it&#39;s once I<br>
&gt; try to log in (clicking the arrow next to the target) that things just hang<br>
&gt; indefinitely.<br>
&gt;<br>
&gt; # iscsiadm -m session<br>
&gt; tcp: [2] <a href="http://10.0.0.10:3260" target="_blank">10.0.0.10:3260</a>,1<br>
&gt; iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi<br>
&gt;<br>
&gt; # iscsiadm -m node<br>
&gt; <a href="http://10.0.0.10:3260" target="_blank">10.0.0.10:3260</a>,1 iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi<br>
&gt;<br>
&gt; # multipath -ll<br>
&gt; 1IET_00010001 dm-3 IET,VIRTUAL-DISK<br>
&gt; size=500G features=&#39;0&#39; hwhandler=&#39;0&#39; wp=rw<br>
&gt; `-+- policy=&#39;round-robin 0&#39; prio=1 status=active<br>
&gt;   `- 8:0:0:1 sdd 8:48 active ready running<br>
&gt; 1ATA_WDC_WD5003ABYZ-011FA0_WD-WMAYP0DNSAEZ dm-2 ATA,WDC WD5003ABYZ-0<br>
&gt; size=466G features=&#39;0&#39; hwhandler=&#39;0&#39; wp=rw<br>
&gt; `-+- policy=&#39;round-robin 0&#39; prio=1 status=active<br>
&gt;   `- 3:0:0:0 sdc 8:32 active ready running<br>
&gt;<br>
&gt; The first entry, 1IET_00010001, is the iSCSI LUN.<br>
&gt;<br>
&gt; The log when I click the array in the interface for the target is this:<br>
&gt;<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:49,900::BindingXMLRPC::251::vds::(wrapper) client [192.168.202.99]<br>
&gt; flowID [7177dafe]<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:49,901::task::595::TaskManager.Task::(_updateState)<br>
&gt; Task=`01d8d01e-8bfd-4764-890f-2026fdeb78d9`::moving from state init -&gt; state<br>
&gt; preparing<br>
&gt; Thread-14::INFO::2014-10-21<br>
&gt; 15:12:49,901::logUtils::44::dispatcher::(wrapper) Run and protect:<br>
&gt; connectStorageServer(domType=3,<br>
&gt; spUUID=&#39;00000000-0000-0000-0000-000000000000&#39;, conList=[{&#39;connection&#39;:<br>
&gt; &#39;10.0.0.10&#39;, &#39;iqn&#39;: &#39;iqn.2014-04.edu.tamu.brazos.)<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:49,902::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) &#39;/usr/bin/sudo -n<br>
&gt; /sbin/iscsiadm -m node -T<br>
&gt; iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi -I default -p<br>
&gt; <a href="http://10.0.0.10:3260" target="_blank">10.0.0.10:3260</a>,1 --op=new&#39; (cwd None)<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,684::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) SUCCESS: &lt;err&gt; =<br>
&gt; &#39;&#39;; &lt;rc&gt; = 0<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,685::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) &#39;/usr/bin/sudo -n<br>
&gt; /sbin/iscsiadm -m node -T<br>
&gt; iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi -I default -p<br>
&gt; <a href="http://10.0.0.10:3260" target="_blank">10.0.0.10:3260</a>,1 -l&#39; (cwd None)<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,711::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) SUCCESS: &lt;err&gt; =<br>
&gt; &#39;&#39;; &lt;rc&gt; = 0<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,711::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) &#39;/usr/bin/sudo -n<br>
&gt; /sbin/iscsiadm -m node -T<br>
&gt; iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi -I default -p<br>
&gt; <a href="http://10.0.0.10:3260" target="_blank">10.0.0.10:3260</a>,1 -n node.startup -v manual --op)<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,767::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) SUCCESS: &lt;err&gt; =<br>
&gt; &#39;&#39;; &lt;rc&gt; = 0<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,767::lvm::373::OperationMutex::(_reloadvgs) Operation &#39;lvm reload<br>
&gt; operation&#39; got the operation mutex<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,768::lvm::296::Storage.Misc.excCmd::(cmd) &#39;/usr/bin/sudo -n<br>
&gt; /sbin/lvm vgs --config &quot; devices { preferred_names = [\\&quot;^/dev/mapper/\\&quot;]<br>
&gt; ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3)<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,968::lvm::296::Storage.Misc.excCmd::(cmd) SUCCESS: &lt;err&gt; = &#39;  No<br>
&gt; volume groups found\n&#39;; &lt;rc&gt; = 0<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,969::lvm::415::OperationMutex::(_reloadvgs) Operation &#39;lvm reload<br>
&gt; operation&#39; released the operation mutex<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,974::hsm::2352::Storage.HSM::(__prefetchDomains) Found SD uuids: ()<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,974::hsm::2408::Storage.HSM::(connectStorageServer) knownSDs: {}<br>
&gt; Thread-14::INFO::2014-10-21<br>
&gt; 15:12:56,974::logUtils::47::dispatcher::(wrapper) Run and protect:<br>
&gt; connectStorageServer, Return response: {&#39;statuslist&#39;: [{&#39;status&#39;: 0, &#39;id&#39;:<br>
&gt; &#39;00000000-0000-0000-0000-000000000000&#39;}]}<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,974::task::1185::TaskManager.Task::(prepare)<br>
&gt; Task=`01d8d01e-8bfd-4764-890f-2026fdeb78d9`::finished: {&#39;statuslist&#39;:<br>
&gt; [{&#39;status&#39;: 0, &#39;id&#39;: &#39;00000000-0000-0000-0000-000000000000&#39;}]}<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,975::task::595::TaskManager.Task::(_updateState)<br>
&gt; Task=`01d8d01e-8bfd-4764-890f-2026fdeb78d9`::moving from state preparing -&gt;<br>
&gt; state finished<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,975::resourceManager::940::ResourceManager.Owner::(releaseAll)<br>
&gt; Owner.releaseAll requests {} resources {}<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,975::resourceManager::977::ResourceManager.Owner::(cancelAll)<br>
&gt; Owner.cancelAll requests {}<br>
&gt; Thread-14::DEBUG::2014-10-21<br>
&gt; 15:12:56,975::task::990::TaskManager.Task::(_decref)<br>
&gt; Task=`01d8d01e-8bfd-4764-890f-2026fdeb78d9`::ref 0 aborting False<br>
&gt; Thread-13::DEBUG::2014-10-21<br>
&gt; 15:13:18,281::task::595::TaskManager.Task::(_updateState)<br>
&gt; Task=`8674b6b0-5e4c-4f0c-8b6b-c5fa5fef6126`::moving from state init -&gt; state<br>
&gt; preparing<br>
&gt; Thread-13::INFO::2014-10-21<br>
&gt; 15:13:18,281::logUtils::44::dispatcher::(wrapper) Run and protect:<br>
&gt; repoStats(options=None)<br>
&gt; Thread-13::INFO::2014-10-21<br>
&gt; 15:13:18,282::logUtils::47::dispatcher::(wrapper) Run and protect:<br>
&gt; repoStats, Return response: {}<br>
&gt; Thread-13::DEBUG::2014-10-21<br>
&gt; 15:13:18,282::task::1185::TaskManager.Task::(prepare)<br>
&gt; Task=`8674b6b0-5e4c-4f0c-8b6b-c5fa5fef6126`::finished: {}<br>
&gt; Thread-13::DEBUG::2014-10-21<br>
&gt; 15:13:18,282::task::595::TaskManager.Task::(_updateState)<br>
&gt; Task=`8674b6b0-5e4c-4f0c-8b6b-c5fa5fef6126`::moving from state preparing -&gt;<br>
&gt; state finished<br>
&gt; Thread-13::DEBUG::2014-10-21<br>
&gt; 15:13:18,282::resourceManager::940::ResourceManager.Owner::(releaseAll)<br>
&gt; Owner.releaseAll requests {} resources {}<br>
&gt; Thread-13::DEBUG::2014-10-21<br>
&gt; 15:13:18,282::resourceManager::977::ResourceManager.Owner::(cancelAll)<br>
&gt; Owner.cancelAll requests {}<br>
&gt; Thread-13::DEBUG::2014-10-21<br>
&gt; 15:13:18,283::task::990::TaskManager.Task::(_decref)<br>
&gt; Task=`8674b6b0-5e4c-4f0c-8b6b-c5fa5fef6126`::ref 0 aborting False<br>
&gt;<br>
&gt; The lines prefixed with &quot;Thread-13&quot; just repeat over and over, with only<br>
&gt; the Task value changing.<br>
&gt;<br>
&gt; I&#39;m unsure what can be done to restore things.  The iSCSI connection is good<br>
&gt; and I&#39;m able to see the logical volumes:<br>
&gt;<br>
&gt; # lvscan<br>
&gt;   ACTIVE            &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/metadata&#39;<br>
&gt; [512.00 MiB] inherit<br>
&gt;   ACTIVE            &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/leases&#39; [2.00<br>
&gt; GiB] inherit<br>
&gt;   ACTIVE            &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/ids&#39; [128.00<br>
&gt; MiB] inherit<br>
&gt;   ACTIVE            &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/inbox&#39;<br>
&gt; [128.00 MiB] inherit<br>
&gt;   ACTIVE            &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/outbox&#39;<br>
&gt; [128.00 MiB] inherit<br>
&gt;   ACTIVE            &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/master&#39; [1.00<br>
&gt; GiB] inherit<br>
&gt;   inactive<br>
&gt; &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/aced9726-5a28-4d52-96f5-89553ba770af&#39;<br>
&gt; [100.00 GiB] inherit<br>
&gt;   inactive<br>
&gt; &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/87bf28aa-be25-4a93-9b23-f70bfd8accc0&#39;<br>
&gt; [1.00 GiB] inherit<br>
&gt;   inactive<br>
&gt; &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/27256587-bf87-4519-89e7-260e13697de3&#39;<br>
&gt; [20.00 GiB] inherit<br>
&gt;   inactive<br>
&gt; &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/ac2cb7f9-1df9-43dc-9fda-8a9958ef970f&#39;<br>
&gt; [20.00 GiB] inherit<br>
&gt;   inactive<br>
&gt; &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/d8c41f05-006a-492b-8e5f-101c4e113b28&#39;<br>
&gt; [100.00 GiB] inherit<br>
&gt;   inactive<br>
&gt; &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/83f17e9b-183e-4bad-ada5-bcef1c5c8e6a&#39;<br>
&gt; [20.00 GiB] inherit<br>
&gt;   inactive<br>
&gt; &#39;/dev/4eeb8415-c912-44bf-b482-2673849705c9/cf79052e-b4ef-4bda-96dc-c53b7c2acfb5&#39;<br>
&gt; [20.00 GiB] inherit<br>
&gt;   ACTIVE            &#39;/dev/vg_ovirtnode02/lv_swap&#39; [46.59 GiB] inherit<br>
&gt;   ACTIVE            &#39;/dev/vg_ovirtnode02/lv_root&#39; [418.53 GiB] inherit<br>
&gt;<br>
&gt; Thanks,<br>
&gt; - Trey<br>
&gt;<br>
&gt;<br>
&gt;<br>
&gt; On Tue, Oct 21, 2014 at 2:49 PM, Sandra Taylor &lt;<a href="mailto:jtt77777@gmail.com">jtt77777@gmail.com</a>&gt; wrote:<br>
&gt;&gt;<br>
&gt;&gt; Hi Trey,<br>
&gt;&gt; Sorry for your trouble.<br>
&gt;&gt; Don&#39;t know if I can help, but I run iSCSI here as my primary domain, so<br>
&gt;&gt; I&#39;ve had some experience with it.<br>
&gt;&gt; I don&#39;t know the answer to the master domain question.<br>
&gt;&gt;<br>
&gt;&gt; Does iSCSI show as connected using iscsiadm -m session and -m node?<br>
&gt;&gt; In the vdsm log there should be the iscsiadm commands that were<br>
&gt;&gt; executed to connect.<br>
&gt;&gt; Does multipath -ll show anything?<br>
&gt;&gt;<br>
&gt;&gt; -John<br>
&gt;&gt;<br>
&gt;&gt; On Tue, Oct 21, 2014 at 3:18 PM, Trey Dockendorf &lt;<a href="mailto:treydock@gmail.com">treydock@gmail.com</a>&gt;<br>
&gt;&gt; wrote:<br>
&gt;&gt; &gt; I was able to get iSCSI over TCP working... but now the task of adding<br>
&gt;&gt; &gt; the LUN in the GUI has been stuck at the &quot;spinning&quot; icon for about<br>
&gt;&gt; &gt; 20 minutes.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; I see these entries in vdsm.log over and over with the Task value<br>
&gt;&gt; &gt; changing:<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Thread-14::DEBUG::2014-10-21<br>
&gt;&gt; &gt; 14:16:50,086::task::595::TaskManager.Task::(_updateState)<br>
&gt;&gt; &gt; Task=`ebcd8e0a-54b1-43d2-92a2-ed9fd62d00fa`::moving from state init -&gt;<br>
&gt;&gt; &gt; state<br>
&gt;&gt; &gt; preparing<br>
&gt;&gt; &gt; Thread-14::INFO::2014-10-21<br>
&gt;&gt; &gt; 14:16:50,086::logUtils::44::dispatcher::(wrapper) Run and protect:<br>
&gt;&gt; &gt; repoStats(options=None)<br>
&gt;&gt; &gt; Thread-14::INFO::2014-10-21<br>
&gt;&gt; &gt; 14:16:50,086::logUtils::47::dispatcher::(wrapper) Run and protect:<br>
&gt;&gt; &gt; repoStats, Return response: {}<br>
&gt;&gt; &gt; Thread-14::DEBUG::2014-10-21<br>
&gt;&gt; &gt; 14:16:50,087::task::1185::TaskManager.Task::(prepare)<br>
&gt;&gt; &gt; Task=`ebcd8e0a-54b1-43d2-92a2-ed9fd62d00fa`::finished: {}<br>
&gt;&gt; &gt; Thread-14::DEBUG::2014-10-21<br>
&gt;&gt; &gt; 14:16:50,087::task::595::TaskManager.Task::(_updateState)<br>
&gt;&gt; &gt; Task=`ebcd8e0a-54b1-43d2-92a2-ed9fd62d00fa`::moving from state preparing<br>
&gt;&gt; &gt; -&gt;<br>
&gt;&gt; &gt; state finished<br>
&gt;&gt; &gt; Thread-14::DEBUG::2014-10-21<br>
&gt;&gt; &gt; 14:16:50,087::resourceManager::940::ResourceManager.Owner::(releaseAll)<br>
&gt;&gt; &gt; Owner.releaseAll requests {} resources {}<br>
&gt;&gt; &gt; Thread-14::DEBUG::2014-10-21<br>
&gt;&gt; &gt; 14:16:50,087::resourceManager::977::ResourceManager.Owner::(cancelAll)<br>
&gt;&gt; &gt; Owner.cancelAll requests {}<br>
&gt;&gt; &gt; Thread-14::DEBUG::2014-10-21<br>
&gt;&gt; &gt; 14:16:50,087::task::990::TaskManager.Task::(_decref)<br>
&gt;&gt; &gt; Task=`ebcd8e0a-54b1-43d2-92a2-ed9fd62d00fa`::ref 0 aborting False<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; What can I do to get my storage back online?  Right now my iSCSI domain<br>
&gt;&gt; &gt; is master (something I did not want), which is odd considering the NFS<br>
&gt;&gt; &gt; data domain was added as master when I set up oVirt.  Nothing will come<br>
&gt;&gt; &gt; back until I get the master domain online, and I&#39;m unsure what to do now.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Thanks,<br>
&gt;&gt; &gt; - Trey<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; On Tue, Oct 21, 2014 at 12:58 PM, Trey Dockendorf &lt;<a href="mailto:treydock@gmail.com">treydock@gmail.com</a>&gt;<br>
&gt;&gt; &gt; wrote:<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt; I had a catastrophic failure of the IB switch that was used by all my<br>
&gt;&gt; &gt;&gt; storage domains.  I had one data domain that was NFS and one that was<br>
&gt;&gt; &gt;&gt; iSCSI.<br>
&gt;&gt; &gt;&gt; I managed to get the iSCSI LUN detached using the docs [1], but now I&#39;ve<br>
&gt;&gt; &gt;&gt; noticed that somehow my master domain went from the NFS domain to the<br>
&gt;&gt; &gt;&gt; iSCSI domain, and I&#39;m unable to switch them back.<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt; How does one change the master?  Right now I am having issues getting<br>
&gt;&gt; &gt;&gt; iSCSI over TCP to work, so I&#39;m sort of stuck with 30 VMs down and an<br>
&gt;&gt; &gt;&gt; entire cluster inaccessible.<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt; Thanks,<br>
&gt;&gt; &gt;&gt; - Trey<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt; [1] <a href="http://www.ovirt.org/Features/Manage_Storage_Connections" target="_blank">http://www.ovirt.org/Features/Manage_Storage_Connections</a><br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; _______________________________________________<br>
&gt;&gt; &gt; Users mailing list<br>
&gt;&gt; &gt; <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
&gt;&gt; &gt; <a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
&gt;&gt; &gt;<br>
&gt;<br>
&gt;<br>
</div></div></blockquote></div><br></div>