[ovirt-users] best way to remove SAN lun
Gianluca Cecchi
gianluca.cecchi at gmail.com
Tue Feb 21 16:34:21 UTC 2017
On Tue, Feb 21, 2017 at 5:12 PM, Adam Litke <alitke at redhat.com> wrote:
>
>
> On Tue, Feb 21, 2017 at 10:19 AM, Gianluca Cecchi <
> gianluca.cecchi at gmail.com> wrote:
>
>> Hello,
>> currently I have a cluster of 3 hosts, each with FC SAN connectivity to
>> 4 LUNs: 3 are already configured as storage domains (1TB, 2TB, 4TB) and
>> one is free, not allocated.
>> See here for screenshot:
>> https://drive.google.com/file/d/0BwoPbcrMv8mvRVZZMTlNcTQ5MGs/view?usp=sharing
>>
>> At the moment the command "multipath -l" run on the hosts shows all 4
>> LUNs.
>>
>> Now I want to do 2 things at storage array level:
>>
>> - remove the 2TB storage domain LUN
>> - remove the 20Gb LUN not yet allocated
>>
>> What is the correct workflow, supposing I have already emptied the 2TB
>> SD of VM disks and such?
>> Select the 2TB SD, then the Datacenter sub-tab, then "Maintenance",
>> "Detach" and at the end "Remove"?
>>
>
> Yes, these should be your first steps.
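For scripting, the same maintenance/detach/remove sequence should also be
reachable through the oVirt REST API; a rough sketch (endpoint paths as I
remember them from the 4.x API, and DC_ID, SD_ID, engine.example.com and
myhost are all placeholders, so verify against your engine's
/ovirt-engine/api documentation):

  # put the storage domain into maintenance (deactivate its attachment)
  curl -k -u admin@internal:PASSWORD -X POST \
    -H "Content-Type: application/xml" -d "<action/>" \
    https://engine.example.com/ovirt-engine/api/datacenters/DC_ID/storagedomains/SD_ID/deactivate
  # detach it from the data center
  curl -k -u admin@internal:PASSWORD -X DELETE \
    https://engine.example.com/ovirt-engine/api/datacenters/DC_ID/storagedomains/SD_ID
  # finally remove it; remove should also accept a host parameter naming
  # the host that wipes the domain metadata (check the API docs)
  curl -k -u admin@internal:PASSWORD -X DELETE \
    "https://engine.example.com/ovirt-engine/api/storagedomains/SD_ID?host=myhost"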
>
>
>> I think I continue to see 4 LUNs at this point, correct?
>>
>
> Yes.
>
>
>> Now can I proceed with removal of the LUN at storage array level?
>>
>> Should I select an SD line and then "Scan Disks" to refresh the SAN and
>> see only 2 of them left in multipath at the end?
>> Or is any manual command needed at host level before removal from the array?
>>
>
> After removing the storage domains you should be able to remove the LUNs.
> I am not extremely familiar with the multipath and low-level SCSI commands,
> but I would try the "Scan Disks" button and, if the LUNs are not gone from
> your host, you can manually remove them. I think that involves flushing the
> map from multipath (multipath -f) and deleting the device from the SCSI
> subsystem.
>
>> Thanks in advance
>>
>
> Hope this helped you.
>
>
Hello,
the "Scan Disks" action seems related to the particular storage domain
selected in the storage tab, not to the overall FC SAN connectivity...
If I then select "Manage Domain", it still shows the now missing disks with
an exclamation mark beside them.
I tried to follow the standard RHEL 7 procedure for removing a storage device:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/removing_devices.html
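For reference, the removal steps from that guide for a multipathed device
boil down to something like this (a sketch; the WWID below is the one of my
2TB LUN, and sdX/sdY are placeholders for the path devices listed by
"multipath -ll"):

  # 1. make sure nothing uses the device: no mounts, no LVM, no open fds
  # 2. flush and remove the multipath map
  multipath -f 3600a0b80002999020000cd3c5501458f
  # 3. flush buffers and delete every SCSI path device that backed the map
  for dev in sdX sdY; do
      blockdev --flushbufs /dev/$dev
      echo 1 > /sys/block/$dev/device/delete
  done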
I was able to remove the 20Gb LUN, which was never used in oVirt, at the OS
level, but for the LUN backing the previous 2TB storage domain I get an
error that the map is in use:
[root at ovmsrv06 ~]# multipath -f 3600a0b80002999020000cd3c5501458f
Feb 21 17:25:58 | 3600a0b80002999020000cd3c5501458f: map in use
Feb 21 17:25:58 | failed to remove multipath map 3600a0b80002999020000cd3c5501458f
[root at ovmsrv06 ~]#
[root at ovmsrv06 ~]# fuser /dev/mapper/3600a0b80002999020000cd3c5501458f
[root at ovmsrv06 ~]#
[root at ovmsrv06 ~]# ll /dev/mapper/3600a0b80002999020000cd3c5501458f
lrwxrwxrwx. 1 root root 7 Feb 21 17:25 /dev/mapper/3600a0b80002999020000cd3c5501458f -> ../dm-4
[root at ovmsrv06 ~]# fuser /dev/dm-4
[root at ovmsrv06 ~]#
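fuser shows nothing, but it only reports userspace processes; a
device-mapper stack on top of the LUN (active LVs, for instance) would not
appear there. The kernel-level holders can be checked instead, something
like:

  # list kernel holders of the multipath map (empty means nothing stacked on it)
  ls /sys/block/dm-4/holders
  # or show the whole device-mapper stacking tree
  dmsetup ls --tree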
The strange thing is that the vgs command returns different values on the three hosts:
[root at ovmsrv05 vdsm]# vgs
  VG                                   #PV #LV #SN Attr   VSize  VFree
  922b5269-ab56-4c4d-838f-49d33427e2ab   1  22   0 wz--n-  4.00t 3.49t
  cl_ovmsrv05                            1   3   0 wz--n- 67.33g     0
[root at ovmsrv05 vdsm]#
[root at ovmsrv06 ~]# vgs
  VG                                   #PV #LV #SN Attr   VSize  VFree
  922b5269-ab56-4c4d-838f-49d33427e2ab   1  22   0 wz--n-  4.00t 3.49t
  cl                                     1   3   0 wz--n- 67.33g     0
[root at ovmsrv06 ~]#
[root at ovmsrv07 vdsm]# vgs
  VG                                   #PV #LV #SN Attr   VSize  VFree
  900b1853-e192-4661-a0f9-7c7c396f6f49   1  10   0 wz--n-  2.00t 1.76t
  922b5269-ab56-4c4d-838f-49d33427e2ab   1  27   0 wz--n-  4.00t 3.34t
  cl                                     1   3   0 wz--n- 67.33g     0
[root at ovmsrv07 vdsm]#
So no host shows a VG for the 1TB storage domain, and in particular ovmsrv07
still has a 2TB VG that I suspect belongs to the previous storage domain:
[root at ovmsrv07 vdsm]# lvs 900b1853-e192-4661-a0f9-7c7c396f6f49
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  35b8834e-a429-4223-b293-51d562b6def4 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi------- 128.00m
  7ed43974-1039-4a68-a8b3-321e7594fe4c 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi------- 240.00g
  d7f6be37-0f6c-43e3-b0af-a511fc59c842 900b1853-e192-4661-a0f9-7c7c396f6f49 -wi------- 128.00m
  ids                                  900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 128.00m
  inbox                                900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 128.00m
  leases                               900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a-----   2.00g
  master                               900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a-----   1.00g
  metadata                             900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 512.00m
  outbox                               900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a----- 128.00m
  xleases                              900b1853-e192-4661-a0f9-7c7c396f6f49 -wi-a-----   1.00g
[root at ovmsrv07 vdsm]#
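If those LVs are what keeps the 2TB map open (the "a" in the Attr column
means active), then deactivating the leftover VG on each host that still
shows it, before flushing the map, should do it; something like this
(untested here):

  # deactivate all LVs of the leftover storage domain VG
  vgchange -an 900b1853-e192-4661-a0f9-7c7c396f6f49
  # after that the multipath map should flush cleanly
  multipath -f 3600a0b80002999020000cd3c5501458f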
Gianluca