<div dir="ltr">John,<div><br></div><div>Thanks for reply. The Discover function in GUI works...it's once I try and login (Click the array next to target) that things just hang indefinitely.</div><div><br></div><div><div># iscsiadm -m session</div><div>tcp: [2] <a href="http://10.0.0.10:3260">10.0.0.10:3260</a>,1 iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi</div></div><div><br></div><div><div># iscsiadm -m node</div><div><a href="http://10.0.0.10:3260">10.0.0.10:3260</a>,1 iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi</div></div><div><br></div><div><div># multipath -ll</div><div>1IET_00010001 dm-3 IET,VIRTUAL-DISK</div><div>size=500G features='0' hwhandler='0' wp=rw</div><div>`-+- policy='round-robin 0' prio=1 status=active</div><div> `- 8:0:0:1 sdd 8:48 active ready running</div><div>1ATA_WDC_WD5003ABYZ-011FA0_WD-WMAYP0DNSAEZ dm-2 ATA,WDC WD5003ABYZ-0</div><div>size=466G features='0' hwhandler='0' wp=rw</div><div>`-+- policy='round-robin 0' prio=1 status=active</div><div> `- 3:0:0:0 sdc 8:32 active ready running</div></div><div><br></div><div>The first entry, 1IET_00010001 is the iSCSI LUN.</div><div><br></div><div>The log when I click the array in the interface for the target is this:</div><div><br></div><div><div>Thread-14::DEBUG::2014-10-21 15:12:49,900::BindingXMLRPC::251::vds::(wrapper) client [192.168.202.99] flowID [7177dafe]</div><div>Thread-14::DEBUG::2014-10-21 15:12:49,901::task::595::TaskManager.Task::(_updateState) Task=`01d8d01e-8bfd-4764-890f-2026fdeb78d9`::moving from state init -> state preparing</div><div>Thread-14::INFO::2014-10-21 15:12:49,901::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=3, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '10.0.0.10', 'iqn': 'iqn.2014-04.edu.tamu.brazos.)</div><div>Thread-14::DEBUG::2014-10-21 15:12:49,902::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi -I default -p <a href="http://10.0.0.10:3260">10.0.0.10:3260</a>,1 --op=new' (cwd None)</div><div>Thread-14::DEBUG::2014-10-21 15:12:56,684::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0</div><div>Thread-14::DEBUG::2014-10-21 15:12:56,685::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi -I default -p <a href="http://10.0.0.10:3260">10.0.0.10:3260</a>,1 -l' (cwd None)</div><div>Thread-14::DEBUG::2014-10-21 15:12:56,711::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0</div><div>Thread-14::DEBUG::2014-10-21 15:12:56,711::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi -I default -p <a href="http://10.0.0.10:3260">10.0.0.10:3260</a>,1 -n node.startup -v manual --op)</div><div>Thread-14::DEBUG::2014-10-21 15:12:56,767::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0</div><div>Thread-14::DEBUG::2014-10-21 15:12:56,767::lvm::373::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex</div><div>Thread-14::DEBUG::2014-10-21 15:12:56,768::lvm::296::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3)</div><div>Thread-14::DEBUG::2014-10-21 15:12:56,968::lvm::296::Storage.Misc.excCmd::(cmd) 
The log when I click the array in the interface for the target is this:

Thread-14::DEBUG::2014-10-21 15:12:49,900::BindingXMLRPC::251::vds::(wrapper) client [192.168.202.99] flowID [7177dafe]
Thread-14::DEBUG::2014-10-21 15:12:49,901::task::595::TaskManager.Task::(_updateState) Task=`01d8d01e-8bfd-4764-890f-2026fdeb78d9`::moving from state init -> state preparing
Thread-14::INFO::2014-10-21 15:12:49,901::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=3, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '10.0.0.10', 'iqn': 'iqn.2014-04.edu.tamu.brazos.)
Thread-14::DEBUG::2014-10-21 15:12:49,902::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi -I default -p 10.0.0.10:3260,1 --op=new' (cwd None)
Thread-14::DEBUG::2014-10-21 15:12:56,684::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-14::DEBUG::2014-10-21 15:12:56,685::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi -I default -p 10.0.0.10:3260,1 -l' (cwd None)
Thread-14::DEBUG::2014-10-21 15:12:56,711::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-14::DEBUG::2014-10-21 15:12:56,711::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2014-04.edu.tamu.brazos.vmstore1:ovirt-data_iscsi -I default -p 10.0.0.10:3260,1 -n node.startup -v manual --op)
Thread-14::DEBUG::2014-10-21 15:12:56,767::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-14::DEBUG::2014-10-21 15:12:56,767::lvm::373::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-14::DEBUG::2014-10-21 15:12:56,768::lvm::296::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3)
Thread-14::DEBUG::2014-10-21 15:12:56,968::lvm::296::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
Thread-14::DEBUG::2014-10-21 15:12:56,969::lvm::415::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-14::DEBUG::2014-10-21 15:12:56,974::hsm::2352::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-14::DEBUG::2014-10-21 15:12:56,974::hsm::2408::Storage.HSM::(connectStorageServer) knownSDs: {}
Thread-14::INFO::2014-10-21 15:12:56,974::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-14::DEBUG::2014-10-21 15:12:56,974::task::1185::TaskManager.Task::(prepare) Task=`01d8d01e-8bfd-4764-890f-2026fdeb78d9`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-14::DEBUG::2014-10-21 15:12:56,975::task::595::TaskManager.Task::(_updateState) Task=`01d8d01e-8bfd-4764-890f-2026fdeb78d9`::moving from state preparing -> state finished
Thread-14::DEBUG::2014-10-21 15:12:56,975::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-14::DEBUG::2014-10-21 15:12:56,975::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-14::DEBUG::2014-10-21 15:12:56,975::task::990::TaskManager.Task::(_decref) Task=`01d8d01e-8bfd-4764-890f-2026fdeb78d9`::ref 0 aborting False
Thread-13::DEBUG::2014-10-21 15:13:18,281::task::595::TaskManager.Task::(_updateState) Task=`8674b6b0-5e4c-4f0c-8b6b-c5fa5fef6126`::moving from state init -> state preparing
Thread-13::INFO::2014-10-21 15:13:18,281::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2014-10-21 15:13:18,282::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-13::DEBUG::2014-10-21 15:13:18,282::task::1185::TaskManager.Task::(prepare) Task=`8674b6b0-5e4c-4f0c-8b6b-c5fa5fef6126`::finished: {}
Thread-13::DEBUG::2014-10-21 15:13:18,282::task::595::TaskManager.Task::(_updateState) Task=`8674b6b0-5e4c-4f0c-8b6b-c5fa5fef6126`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-10-21 15:13:18,282::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-10-21 15:13:18,282::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-10-21 15:13:18,283::task::990::TaskManager.Task::(_decref) Task=`8674b6b0-5e4c-4f0c-8b6b-c5fa5fef6126`::ref 0 aborting False

The lines prefixed with "Thread-13" just repeat over and over, with only the Task value changing.
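Since those repeating entries flood the log, I've been filtering them out while watching for anything at WARNING or ERROR level around the connect attempt (assuming the default log location of /var/log/vdsm/vdsm.log):

# tail -f /var/log/vdsm/vdsm.log | grep -v Thread-13
# grep -E 'WARNING|ERROR' /var/log/vdsm/vdsm.log | tail -n 50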
I'm unsure what can be done to restore things. The iSCSI connection is good and I'm able to see the logical volumes:

# lvscan
  ACTIVE '/dev/4eeb8415-c912-44bf-b482-2673849705c9/metadata' [512.00 MiB] inherit
  ACTIVE '/dev/4eeb8415-c912-44bf-b482-2673849705c9/leases' [2.00 GiB] inherit
  ACTIVE '/dev/4eeb8415-c912-44bf-b482-2673849705c9/ids' [128.00 MiB] inherit
  ACTIVE '/dev/4eeb8415-c912-44bf-b482-2673849705c9/inbox' [128.00 MiB] inherit
  ACTIVE '/dev/4eeb8415-c912-44bf-b482-2673849705c9/outbox' [128.00 MiB] inherit
  ACTIVE '/dev/4eeb8415-c912-44bf-b482-2673849705c9/master' [1.00 GiB] inherit
  inactive '/dev/4eeb8415-c912-44bf-b482-2673849705c9/aced9726-5a28-4d52-96f5-89553ba770af' [100.00 GiB] inherit
  inactive '/dev/4eeb8415-c912-44bf-b482-2673849705c9/87bf28aa-be25-4a93-9b23-f70bfd8accc0' [1.00 GiB] inherit
  inactive '/dev/4eeb8415-c912-44bf-b482-2673849705c9/27256587-bf87-4519-89e7-260e13697de3' [20.00 GiB] inherit
  inactive '/dev/4eeb8415-c912-44bf-b482-2673849705c9/ac2cb7f9-1df9-43dc-9fda-8a9958ef970f' [20.00 GiB] inherit
  inactive '/dev/4eeb8415-c912-44bf-b482-2673849705c9/d8c41f05-006a-492b-8e5f-101c4e113b28' [100.00 GiB] inherit
  inactive '/dev/4eeb8415-c912-44bf-b482-2673849705c9/83f17e9b-183e-4bad-ada5-bcef1c5c8e6a' [20.00 GiB] inherit
  inactive '/dev/4eeb8415-c912-44bf-b482-2673849705c9/cf79052e-b4ef-4bda-96dc-c53b7c2acfb5' [20.00 GiB] inherit
  ACTIVE '/dev/vg_ovirtnode02/lv_swap' [46.59 GiB] inherit
  ACTIVE '/dev/vg_ovirtnode02/lv_root' [418.53 GiB] inherit
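I haven't tried activating any of those LVs by hand, since I assume vdsm normally manages that, but for reference these are the read-only LVM checks I was planning to run against the domain VG (the VG name is the storage domain UUID from the lvscan output above; the PV path is the multipath device from earlier):

# vgs -o vg_name,pv_count,lv_count,vg_size,vg_free 4eeb8415-c912-44bf-b482-2673849705c9
# lvs -o lv_name,lv_size,lv_attr 4eeb8415-c912-44bf-b482-2673849705c9
# pvs -o pv_name,vg_name /dev/mapper/1IET_00010001

That should at least confirm whether the VG metadata on the iSCSI LUN is intact even though the engine can't bring the domain up.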
Thanks,
- Trey


On Tue, Oct 21, 2014 at 2:49 PM, Sandra Taylor <jtt77777@gmail.com> wrote:

Hi Trey,
Sorry for your trouble.
Don't know if I can help, but I run iSCSI here as my primary domain,
so I've had some experience with it.
I don't know the answer to the master domain question.

Does iSCSI show connected using iscsiadm -m session and -m node?
In the vdsm log there should be the iscsiadm commands that were
executed to connect.
Does multipath -ll show anything?

-John

On Tue, Oct 21, 2014 at 3:18 PM, Trey Dockendorf <treydock@gmail.com> wrote:
> I was able to get iSCSI over TCP working...but now the task of adding the
> LUN to the GUI has been stuck at the "spinning" icon for about 20 minutes.
>
> I see these entries in vdsm.log over and over with the Task value changing:
>
> Thread-14::DEBUG::2014-10-21 14:16:50,086::task::595::TaskManager.Task::(_updateState) Task=`ebcd8e0a-54b1-43d2-92a2-ed9fd62d00fa`::moving from state init -> state preparing
> Thread-14::INFO::2014-10-21 14:16:50,086::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> Thread-14::INFO::2014-10-21 14:16:50,086::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
> Thread-14::DEBUG::2014-10-21 14:16:50,087::task::1185::TaskManager.Task::(prepare) Task=`ebcd8e0a-54b1-43d2-92a2-ed9fd62d00fa`::finished: {}
> Thread-14::DEBUG::2014-10-21 14:16:50,087::task::595::TaskManager.Task::(_updateState) Task=`ebcd8e0a-54b1-43d2-92a2-ed9fd62d00fa`::moving from state preparing -> state finished
> Thread-14::DEBUG::2014-10-21 14:16:50,087::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> Thread-14::DEBUG::2014-10-21 14:16:50,087::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-14::DEBUG::2014-10-21 14:16:50,087::task::990::TaskManager.Task::(_decref) Task=`ebcd8e0a-54b1-43d2-92a2-ed9fd62d00fa`::ref 0 aborting False
>
> What can I do to get my storage back online? Right now my iSCSI domain is
> master (something I did not want), which is odd considering the NFS data
> domain was added as master when I set up oVirt. Nothing will come back until
> I get the master domain online, and I'm unsure what to do now.
>
> Thanks,
> - Trey
>
> On Tue, Oct 21, 2014 at 12:58 PM, Trey Dockendorf <treydock@gmail.com>
> wrote:
>>
>> I had a catastrophic failure of the IB switch that was used by all my
>> storage domains. I had one data domain that was NFS and one that was iSCSI.
>> I managed to get the iSCSI LUN detached using the docs [1] but now I noticed
>> that somehow my master domain went from the NFS domain to the iSCSI domain
>> and I'm unable to switch them back.
>>
>> How does one change the master? Right now I am having issues getting
>> iSCSI over TCP to work, so am sort of stuck with 30 VMs down and an entire
>> cluster inaccessible.
>>
>> Thanks,
>> - Trey
>>
>> [1] http://www.ovirt.org/Features/Manage_Storage_Connections
>
>
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>