[ovirt-users] Removing iSCSI domain: host side part

Yaniv Kaul ykaul at redhat.com
Fri Jul 14 02:17:05 UTC 2017


On Thu, Jul 13, 2017 at 5:59 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:

> On Thu, Jul 13, 2017 at 6:23 PM, Gianluca Cecchi <
> gianluca.cecchi at gmail.com> wrote:
>
>> Hello,
>> I have cleanly removed an iSCSI domain from oVirt. There is another one
>> (connecting to another storage array) that is the master domain.
>> But I see that oVirt hosts still maintain the iscsi session to the LUN.
>> So I want to clean up on the OS side before removing the LUN itself
>> from the storage array.
>>
>> At the moment I still see the multipath lun on both hosts
>>
>> [root at ov301 network-scripts]# multipath -l
>> . . .
>> 364817197b5dfd0e5538d959702249b1c dm-2 EQLOGIC ,100E-00
>> size=4.0T features='0' hwhandler='0' wp=rw
>> `-+- policy='round-robin 0' prio=0 status=active
>>   |- 9:0:0:0  sde 8:64  active undef  running
>>   `- 10:0:0:0 sdf 8:80  active undef  running
>>
>> and
>> [root at ov301 network-scripts]# iscsiadm -m session
>> tcp: [1] 10.10.100.9:3260,1 iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
>> tcp: [2] 10.10.100.9:3260,1 iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
>> . . .
>>
>> Do I have to remove the multipath paths and the multipath device first
>> and then do the iSCSI logout, or is it sufficient to do the iSCSI logout,
>> after which the multipath device and its paths will be cleanly removed
>> from the OS point of view?
>>
>> I would like to avoid leaving the multipath device in a stale condition.
>>
>> Thanks
>> Gianluca
>>
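One way to settle this question on a live host is to look at the map's device-mapper open count before touching it: if it is 0, nothing (LVM included) still holds the device, so flushing should succeed cleanly. A small sketch; the WWID is the example from this thread, open_count is a hypothetical helper, and it falls back to "?" where dmsetup is unavailable:

```shell
#!/bin/sh
# Print the device-mapper open count of a multipath map; 0 means no holder
# (e.g. LVM) is using it, so "multipath -f" should succeed cleanly.
WWID=364817197b5dfd0e5538d959702249b1c   # example WWID from this thread

open_count() {
    if command -v dmsetup >/dev/null 2>&1; then
        dmsetup info -c -o open --noheadings "$1" 2>/dev/null || echo "?"
    else
        echo "?"                         # dmsetup not available on this machine
    fi
}

echo "open count for $WWID: $(open_count "$WWID")"
```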
>
>
> I have not understood why oVirt still maintains the LVM structures of a
> storage domain after I destroy it....
>

Destroy is an Engine command - it does not touch the storage at all (the
assumption is that you've somehow lost/deleted your storage domain and now
you want to get rid of it from the Engine side).


>
> Anyway, these were the steps done on the host side before removing the LUN
> at the storage array level
>

I assume removing the LUN + reboot would have been quicker.
Y.


>
> Pick the VG for which the LUN is still a PV:
>
> vgchange -an 5ed04196-87f1-480e-9fee-9dd450a3b53b
> --> actually all lvs were already inactive
>
> vgremove 5ed04196-87f1-480e-9fee-9dd450a3b53b
> Do you really want to remove volume group "5ed04196-87f1-480e-9fee-9dd450a3b53b"
> containing 22 logical volumes? [y/n]: y
>   Logical volume "metadata" successfully removed
>   Logical volume "outbox" successfully removed
>   Logical volume "xleases" successfully removed
>   Logical volume "leases" successfully removed
>   Logical volume "ids" successfully removed
>   Logical volume "inbox" successfully removed
>   Logical volume "master" successfully removed
>   Logical volume "bc141d0d-b648-409b-a862-9b6d950517a5" successfully removed
>   Logical volume "31255d83-ca67-4f47-a001-c734c498d176" successfully removed
>   Logical volume "607dbf59-7d4d-4fc3-ae5f-e8824bf82648" successfully removed
>   Logical volume "dfbf5787-36a4-4685-bf3a-43a55e9cd4a6" successfully removed
>   Logical volume "400ea884-3876-4a21-9ec6-b0b8ac706cee" successfully removed
>   Logical volume "1919f6e6-86cd-4a13-9a21-ce52b9f62e35" successfully removed
>   Logical volume "a3ea679b-95c0-475d-80c5-8dc4d86bd87f" successfully removed
>   Logical volume "32f433c8-a991-4cfc-9a0b-7f44422815b7" successfully removed
>   Logical volume "7f867f59-c977-47cf-b280-a2a0fef8b95b" successfully removed
>   Logical volume "6e2005f2-3ff5-42fa-867e-e7812c6726e4" successfully removed
>   Logical volume "42344cf4-8f9c-464d-ab0f-d62beb15d359" successfully removed
>   Logical volume "293e169e-53ed-4d60-b22a-65835f5b0d29" successfully removed
>   Logical volume "e86752c4-de73-4733-b561-2afb31bcc2d3" successfully removed
>   Logical volume "79350ec5-eea5-458b-a3ee-ba394d2cda27" successfully removed
>   Logical volume "77824fce-4f95-49e3-b732-f791151dd15c" successfully removed
>   Volume group "5ed04196-87f1-480e-9fee-9dd450a3b53b" successfully removed
>
> pvremove /dev/mapper/364817197b5dfd0e5538d959702249b1c
>
> multipath -f 364817197b5dfd0e5538d959702249b1c
>
> iscsiadm -m session -r 1 -u
> Logging out of session [sid: 1, target: iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910, portal: 10.10.100.9,3260]
> Logout of [sid: 1, target: iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910, portal: 10.10.100.9,3260] successful.
>
> iscsiadm -m session -r 2 -u
> Logging out of session [sid: 2, target: iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910, portal: 10.10.100.9,3260]
> Logout of [sid: 2, target: iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910, portal: 10.10.100.9,3260] successful.
>
> done.
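Collected in order, the host-side steps above can be sketched as one script. A minimal sketch: the VG name, WWID and session IDs are the examples from this thread, and the DRYRUN guard (a convenience added here, not part of any oVirt tooling) only prints the commands instead of running them:

```shell
#!/bin/sh
# Host-side cleanup order: deactivate LVs, drop the VG, wipe the PV label,
# flush the multipath map, and only then log out of the iSCSI sessions.
# DRYRUN=1 makes the script print commands instead of executing them.
DRYRUN=1
VG=5ed04196-87f1-480e-9fee-9dd450a3b53b    # example VG from this thread
WWID=364817197b5dfd0e5538d959702249b1c     # example multipath WWID

run() {
    if [ -n "$DRYRUN" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run vgchange -an "$VG"               # deactivate all LVs first
run vgremove -y "$VG"                # remove the VG non-interactively
run pvremove "/dev/mapper/$WWID"     # wipe the PV label from the LUN
run multipath -f "$WWID"             # flush the now-unused multipath map
for sid in 1 2; do                   # session IDs as shown by "iscsiadm -m session"
    run iscsiadm -m session -r "$sid" -u
done
```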
>
> NOTE: on one node I missed the LVM cleanup before logging out of the iSCSI
> session.
> This left an unclean state: the multipath device was reported as having no
> paths but was still in use (by LVM), so the command
> multipath -f
> failed.
> The vgs and lvs commands also threw out many errors, and many errors
> appeared in /var/log/messages too.
>
> These were the commands used to clean up the situation on that node as well:
>
> dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/master
> dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/inbox
> dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/xleases
> dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/leases
> dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/outbox
> dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/ids
> dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/metadata
>
> multipath -f 364817197b5dfd0e5538d959702249b1c
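The per-LV dmsetup calls above can also be generated in a loop rather than typed one by one. A hypothetical sketch using the VG and LV names from this message, again with a DRYRUN guard so it only prints what it would do:

```shell
#!/bin/sh
# Remove each leftover device-mapper node of the stale VG, then flush the
# multipath map. DRYRUN=1 prints the commands instead of executing them.
DRYRUN=1
VG=5ed04196-87f1-480e-9fee-9dd450a3b53b
WWID=364817197b5dfd0e5538d959702249b1c

run() {
    if [ -n "$DRYRUN" ]; then echo "would run: $*"; else "$@"; fi
}

# The special LVs every oVirt block storage domain carries
for lv in master inbox xleases leases outbox ids metadata; do
    run dmsetup remove "$VG/$lv"
done
run multipath -f "$WWID"
```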
>
> Gianluca
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>