[ovirt-users] best way to remove SAN lun

Nelson Lameiras nelson.lameiras at lyra-network.com
Wed Feb 22 07:03:41 UTC 2017


Hello,

Not sure it is the same issue, but we recently had a "major" issue in our production system when removing an iSCSI volume from oVirt and then removing it from the SAN. The problem was that each host kept regularly trying to access the SAN volume, since it had not been completely removed from oVirt. This led to a massive increase in error logs, which completely filled the /var/log partition, which in turn snowballed into vdsm crashing and other nasty consequences.

Anyway, the solution was to manually log out from the SAN (on each host) with iscsiadm and to manually remove the iSCSI targets (again, on each host). It was not difficult once the problem was identified, because we currently have only 3 hosts in this cluster, but I wonder what would happen if we had hundreds of hosts?
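
For reference, the per-host cleanup looked roughly like the following (a sketch only; the target IQN and portal address are placeholders, not our real SAN values):

    # log out of the stale iSCSI session for the removed volume
    iscsiadm -m node -T iqn.2017-02.example:removed-volume -p 192.168.1.100:3260 -u

    # delete the node record so the host stops trying to reconnect
    iscsiadm -m node -T iqn.2017-02.example:removed-volume -p 192.168.1.100:3260 -o delete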

Maybe I'm being naive, but shouldn't this be oVirt's job? Is there an RFE already open on this subject, or should I write one?

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lameiras at lyra-network.com 

www.lyra-network.com | www.payzen.eu 


Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE

----- Original Message -----
From: "Nir Soffer" <nsoffer at redhat.com>
To: "Gianluca Cecchi" <gianluca.cecchi at gmail.com>, "Adam Litke" <alitke at redhat.com>
Cc: "users" <users at ovirt.org>
Sent: Tuesday, February 21, 2017 6:32:18 PM
Subject: Re: [ovirt-users] best way to remove SAN lun

On Tue, Feb 21, 2017 at 7:25 PM, Gianluca Cecchi
<gianluca.cecchi at gmail.com> wrote:
> On Tue, Feb 21, 2017 at 6:10 PM, Nir Soffer <nsoffer at redhat.com> wrote:
>>
>> This is caused by active lvs on the removed storage domains that were not
>> deactivated during the removal. This is a very old known issue.
>>
>> You have to remove the stale device mapper entries - you can see the devices
>> using:
>>
>>     dmsetup status
>>
>> Then you can remove the mapping using:
>>
>>     dmsetup remove device-name
>>
>> Once you have removed the stale lvs, you will be able to remove the multipath
>> device and the underlying paths, and lvm will not complain about read
>> errors.
>>
>> Nir
>
>
> OK Nir, thanks for the advice.
>
> So this is what I ran successfully on the 2 hosts:
>
> [root@ovmsrv05 vdsm]# for dev in $(dmsetup status | grep
> 900b1853--e192--4661--a0f9--7c7c396f6f49 | cut -d ":" -f 1)
> do
>    dmsetup remove $dev
> done
> [root@ovmsrv05 vdsm]#
>
> and now I can run
>
> [root@ovmsrv05 vdsm]# multipath -f 3600a0b80002999020000cd3c5501458f
> [root@ovmsrv05 vdsm]#
>
> Also, the previous maps to the single path devices (with names depending on
> the host) were, for example on ovmsrv05:
>
> 3600a0b80002999020000cd3c5501458f dm-4 IBM     ,1814      FAStT
> size=2.0T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
> |-+- policy='service-time 0' prio=0 status=enabled
> | |- 0:0:0:2 sdb        8:16  failed undef running
> | `- 1:0:0:2 sdh        8:112 failed undef running
> `-+- policy='service-time 0' prio=0 status=enabled
>   |- 0:0:1:2 sdg        8:96  failed undef running
>   `- 1:0:1:2 sdn        8:208 failed undef running
>
> And removal of single path devices:
>
> [root@ovmsrv05 root]# for dev in sdb sdh sdg sdn
> do
>   echo 1 > /sys/block/${dev}/device/delete
> done
> [root@ovmsrv05 vdsm]#
>
> All clean now... ;-)

Great!

I think we should have a script doing all these steps.
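
Something like this, perhaps - a rough, untested sketch that just strings
together the steps above, assuming the storage domain UUID and the multipath
WWID are passed as arguments:

    #!/bin/bash
    # Sketch only - not tested, adjust before using on a real host.
    # Usage: cleanup-removed-lun.sh <storage-domain-uuid> <multipath-wwid>
    sd_uuid=$1
    wwid=$2

    # dmsetup shows the domain uuid with doubled dashes, e.g. 900b1853--e192--...
    dm_pattern=${sd_uuid//-/--}

    # remember the SCSI paths backing the multipath device before flushing it
    paths=$(multipath -ll "$wwid" | grep -o 'sd[a-z]\+' | sort -u)

    # 1. remove the stale device mapper entries left by the domain's lvs
    for dev in $(dmsetup status | grep "$dm_pattern" | cut -d ":" -f 1); do
        dmsetup remove "$dev"
    done

    # 2. flush the multipath device
    multipath -f "$wwid"

    # 3. delete the underlying SCSI path devices
    for p in $paths; do
        echo 1 > "/sys/block/${p}/device/delete"
    done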

Nir
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

