On Tue, Feb 21, 2017 at 5:12 PM, Adam Litke <alitke@redhat.com> wrote:

> On Tue, Feb 21, 2017 at 10:19 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
>
>> Hello,
>> currently I have a cluster of 3 hosts where each one has FC SAN connectivity to 4 LUNs: 3 are already configured as storage domains (1TB, 2TB, 4TB); one is free, not allocated.
>> See here for a screenshot:
>> https://drive.google.com/file/d/0BwoPbcrMv8mvRVZZMTlNcTQ5MGs/view?usp=sharing
>>
>> At the moment the command "multipath -l" run on the hosts shows all 4 LUNs.
>>
>> Now I want to do 2 things at storage array level:
>>
>> - remove the 2TB storage domain LUN
>> - remove the 20GB LUN not yet allocated
>>
>> What is the correct workflow, supposing I have already emptied the 2TB domain of VM disks and such?
>> Select the 2TB SD, then the Data Center subtab, then "Maintenance", "Detach" and at the end "Remove"?
>
> Yes, these should be your first steps.
>
>> I think I would continue to see 4 LUNs at this point, correct?
>
> Yes.
>
>> Then I proceed with removal of the LUNs at storage array level?
>>
>> Should I select an SD line and then "Scan Disks" to refresh the SAN and see only 2 of them left in multipath?
>> Or is any manual command needed at host level before removal from the array?
>
> After removing the storage domains you should be able to remove the LUNs. I am not extremely familiar with the multipath and low-level SCSI commands, but I would try the "Scan Disks" button, and if the LUNs are not gone from your host you can manually remove them. I think that involves removing the device from multipath (multipath -f) and deleting it from the SCSI subsystem.
>
>> Thanks in advance
>
> Hope this helped you.
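
(For reference, a minimal sketch of the manual removal mentioned above, assuming the WWID shown later in this message and using sdX as a stand-in for each underlying path device reported by "multipath -l"; the exact path devices are not given in the thread, so treat this as an illustration rather than the exact commands used.)

    # show the map and note its sdX path devices
    multipath -l 3600a0b80002999020000cd3c5501458f
    # flush the multipath map
    multipath -f 3600a0b80002999020000cd3c5501458f
    # then, for each path device: flush buffers and remove it from the SCSI subsystem
    blockdev --flushbufs /dev/sdX
    echo 1 > /sys/block/sdX/device/delete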

Hello,
the "Scan Disks" action seems related to the particular storage domain selected in the Storage tab, not to the overall FC SAN connectivity...
If I then select "Manage Domain", it still shows the now-missing disks with an exclamation mark beside them.
I tried to follow the standard RHEL 7 way for removal:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/removing_devices.html

At OS level I can remove the 20GB LUN that was never used in oVirt, but for the LUN of the previous 2TB storage domain I get an error that it is in use...

[root@ovmsrv06 ~]# multipath -f 3600a0b80002999020000cd3c5501458f
Feb 21 17:25:58 | 3600a0b80002999020000cd3c5501458f: map in use
Feb 21 17:25:58 | failed to remove multipath map 3600a0b80002999020000cd3c5501458f
[root@ovmsrv06 ~]#

[root@ovmsrv06 ~]# fuser /dev/mapper/3600a0b80002999020000cd3c5501458f
[root@ovmsrv06 ~]#

[root@ovmsrv06 ~]# ll /dev/mapper/3600a0b80002999020000cd3c5501458f
lrwxrwxrwx. 1 root root 7 Feb 21 17:25 /dev/mapper/3600a0b80002999020000cd3c5501458f -> ../dm-4

[root@ovmsrv06 ~]# fuser /dev/dm-4
[root@ovmsrv06 ~]#
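
(Note that fuser only reports userspace opens; a kernel-level holder, typically a device-mapper/LVM volume stacked on top of the multipath map, would not show up there. A minimal sketch of how one could look for such holders, using the dm-4 name from the output above:)

    # any dm-N listed here is a device stacked on top of the map (e.g. an LV)
    ls /sys/block/dm-4/holders/
    # the same information as a tree of all device-mapper devices
    dmsetup ls --tree
    # or per device, showing what sits on top of this particular map
    lsblk /dev/mapper/3600a0b80002999020000cd3c5501458f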

The strange thing is that the vgs command returns different values on the three hosts:

[root@ovmsrv05 vdsm]# vgs
  VG                                   #PV #LV #SN Attr   VSize  VFree
  922b5269-ab56-4c4d-838f-49d33427e2ab   1  22   0 wz--n-  4.00t 3.49t
  cl_ovmsrv05                            1   3   0 wz--n- 67.33g     0
[root@ovmsrv05 vdsm]#

[root@ovmsrv06 ~]# vgs
  VG                                   #PV #LV #SN Attr   VSize  VFree
  922b5269-ab56-4c4d-838f-49d33427e2ab   1  22   0 wz--n-  4.00t 3.49t
  cl                                     1   3   0 wz--n- 67.33g     0
[root@ovmsrv06 ~]#

[root@ovmsrv07 vdsm]# vgs
  VG                                   #PV #LV #SN Attr   VSize  VFree
  900b1853-e192-4661-a0f9-7c7c396f6f49   1  10   0 wz--n-  2.00t 1.76t
  922b5269-ab56-4c4d-838f-49d33427e2ab   1  27   0 wz--n-  4.00t 3.34t
  cl                                     1   3   0 wz--n- 67.33g     0
[root@ovmsrv07 vdsm]#

So no host has the 1TB storage domain as a VG, and in particular ovmsrv07 has a 2TB VG that I suspect was the previous storage domain:

[root@ovmsrv07 vdsm]# lvs 900b1853-e192-4661-a0f9-7c7c396f6f49
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  35b8834e-a429-4223-b293-51d562b6def4 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi------- 128.00m
  7ed43974-1039-4a68-a8b3-321e7594fe4c 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi------- 240.00g
  d7f6be37-0f6c-43e3-b0af-a511fc59c842 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi------- 128.00m
  ids                                  900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 128.00m
  inbox                                900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 128.00m
  leases                               900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a-----   2.00g
  master                               900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a-----   1.00g
  metadata                             900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 512.00m
  outbox                               900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 128.00m
  xleases                              900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a-----   1.00g
[root@ovmsrv07 vdsm]#

Gianluca
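
(If the leftover LVs of that old 2TB VG are indeed what keeps the multipath map busy, one possible cleanup, sketched here as an assumption rather than something confirmed in this thread, would be to deactivate them before retrying the flush; the VG name is taken from the vgs output above and sdX stands for each underlying path device.)

    # on ovmsrv07, where the old VG is still visible: deactivate its LVs
    vgchange -an 900b1853-e192-4661-a0f9-7c7c396f6f49
    # on a host where the VG is no longer listed but stale dm devices remain,
    # "dmsetup remove <name>" on each holder would be the rough equivalent
    # then the flush should no longer report "map in use"
    multipath -f 3600a0b80002999020000cd3c5501458f
    # and finally drop each SCSI path, per the RHEL 7 guide linked above
    echo 1 > /sys/block/sdX/device/delete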