Disk from FC storage not removed

Hi,

Recently I tried to delete a 1 TB disk created on top of a ~3 TB LUN from oVirt engine. The disk is preallocated, and I backed up the data to another disk so I could recreate it as a thin volume. I couldn't remove the disk while it was attached to a VM, but once I detached it I could remove it permanently. The thing is, it only disappeared from the oVirt engine GUI.

I've got 4 hosts with FC HBAs attached to the storage array, and all of them report that this 1 TB disk, which should be gone, is still open on every host simultaneously.

[root@wrops1 BLUE ~]# lvdisplay -m /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  --- Logical volume ---
  LV Path                /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  LV Name                ee53af81-820d-4916-b766-5236ca99daf8
  VG Name                e69d1c16-36d1-4375-aaee-69f5a5ce1616
  LV UUID                sBdBRk-tNyZ-Rval-F4lw-ka6X-wOe8-AQenTb
  LV Write Access        read/write
  LV Creation host, time wrops1.blue, 2015-07-31 10:40:57 +0200
  LV Status              available
  # open                 1
  LV Size                1.00 TiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:29

  --- Segments ---
  Logical extents 0 to 8191:
    Type                linear
    Physical volume     /dev/mapper/3600000e00d0000000024057200000000
    Physical extents    8145 to 16336

Deactivating the LV doesn't work:

[root@wrops1 BLUE ~]# lvchange -an /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  Logical volume e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is used by another device.

Removing it from the hypervisor doesn't work either:

[root@wrops1 BLUE ~]# lvremove --force /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  Logical volume e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is used by another device.

I tried rebooting one host, and as soon as it came back up the volume was open again. lsof on all hosts doesn't show anything meaningful for this LV, as opposed to the other LVs, which are held by qemu-kvm.

Has anyone encountered a similar problem? How can I remove this LV?

Regards
Krzysztof Dajka

On Thu, Jun 9, 2016 at 11:46 AM, Krzysztof Dajka <alteriks@gmail.com> wrote:
Hi,
Recently I tried to delete a 1 TB disk created on top of a ~3 TB LUN from oVirt engine. The disk is preallocated, and I backed up the data to another disk so I could recreate it as a thin volume. I couldn't remove the disk while it was attached to a VM, but once I detached it I could remove it permanently. The thing is, it only disappeared from the oVirt engine GUI.
I've got 4 hosts with FC HBAs attached to the storage array, and all of them report that this 1 TB disk, which should be gone, is still open on every host simultaneously.
[root@wrops1 BLUE ~]# lvdisplay -m /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  --- Logical volume ---
  LV Path                /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  LV Name                ee53af81-820d-4916-b766-5236ca99daf8
  VG Name                e69d1c16-36d1-4375-aaee-69f5a5ce1616
  LV UUID                sBdBRk-tNyZ-Rval-F4lw-ka6X-wOe8-AQenTb
  LV Write Access        read/write
  LV Creation host, time wrops1.blue, 2015-07-31 10:40:57 +0200
  LV Status              available
  # open                 1
  LV Size                1.00 TiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:29

  --- Segments ---
  Logical extents 0 to 8191:
    Type                linear
    Physical volume     /dev/mapper/3600000e00d0000000024057200000000
    Physical extents    8145 to 16336
Deactivating the LV doesn't work:

[root@wrops1 BLUE ~]# lvchange -an /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  Logical volume e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is used by another device.
Looks like your LV is used as a physical volume in another VG - probably a VG created inside a guest. LVM and systemd try hard to discover anything they find on multipath devices and expose it on the hypervisor.

Can you share the output of:

ls -l /sys/block/$(basename $(readlink /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8))/holders

And:

pvscan --cache
vgs -o pv_name,vg_name

Nir
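If the holders directory does list a dm-N entry, something like this should map it back to a device-mapper name (a sketch only - replace dm-N with the entry you actually see):

  # resolve the holding dm device to its device-mapper/LVM name (e.g. guestvg-guestlv)
  dmsetup info -c --noheadings -o name /dev/dm-N
  # or show the device and anything stacked on top of it
  lsblk -o NAME,TYPE /dev/dm-N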
Removing it from the hypervisor doesn't work either:

[root@wrops1 BLUE ~]# lvremove --force /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  Logical volume e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is used by another device.
I tried rebooting one host, and as soon as it came back up the volume was open again. lsof on all hosts doesn't show anything meaningful for this LV, as opposed to the other LVs, which are held by qemu-kvm.
Has anyone encountered a similar problem? How can I remove this LV?

Hi Nir,

Thanks for the solution. I hadn't noticed the guest LV /dev/backupvg01/backuplv01 active on all hypervisors. It turns out I have the same issue with two additional volumes, but nobody noticed because they are only a few GB.

[root@wrops2 BLUE/WRO ~]# ls -l /sys/block/$(basename $(readlink /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8))/holders
total 0
lrwxrwxrwx. 1 root root 0 Jun 13 10:48 dm-43 -> ../../dm-43

[root@wrops2 BLUE/WRO ~]# pvscan --cache
[root@wrops2 BLUE/WRO ~]# vgs -o pv_name,vg_name
  PV                                                                               VG
  /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8   backupvg01
  /dev/sda2                                                                        centos_wrops2
  /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/99a1c067-9728-484a-a0cb-cb6689d5724c   deployvg
  /dev/mapper/3600000e00d0000000024057200000000                                    e69d1c16-36d1-4375-aaee-69f5a5ce1616
  /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/86a6d83f-2661-4fe3-8874-ce4d8a111c0d   jenkins
  /dev/sda3                                                                        w2vg1

[root@wrops2 BLUE/WRO ~]# dmsetup info
Name:              backupvg01-backuplv01
State:             ACTIVE
Read Ahead:        8192
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 43
Number of targets: 1
UUID: LVM-ubxOH5R2h6B8JwLGfhpiNjnAKlPxMPy6KfkeLBxXajoT3gxU0yC5JvOQQVkixrTA

[root@wrops2 BLUE/WRO ~]# lvchange -an /dev/backupvg01/backuplv01
[root@wrops2 BLUE/WRO ~]# lvremove /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
Do you really want to remove active logical volume ee53af81-820d-4916-b766-5236ca99daf8? [y/n]: y
  Logical volume "ee53af81-820d-4916-b766-5236ca99daf8" successfully removed

Would this configuration in lvm.conf on all hypervisors solve the problem of scanning guest volumes?

filter = [ "r|/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/.*|" ]

2016-06-11 23:16 GMT+02:00 Nir Soffer <nsoffer@redhat.com>:
On Thu, Jun 9, 2016 at 11:46 AM, Krzysztof Dajka <alteriks@gmail.com> wrote:
Hi,
Recently I tried to delete a 1 TB disk created on top of a ~3 TB LUN from oVirt engine. The disk is preallocated, and I backed up the data to another disk so I could recreate it as a thin volume. I couldn't remove the disk while it was attached to a VM, but once I detached it I could remove it permanently. The thing is, it only disappeared from the oVirt engine GUI.
I've got 4 hosts with FC HBAs attached to the storage array, and all of them report that this 1 TB disk, which should be gone, is still open on every host simultaneously.
[root@wrops1 BLUE ~]# lvdisplay -m /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  --- Logical volume ---
  LV Path                /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  LV Name                ee53af81-820d-4916-b766-5236ca99daf8
  VG Name                e69d1c16-36d1-4375-aaee-69f5a5ce1616
  LV UUID                sBdBRk-tNyZ-Rval-F4lw-ka6X-wOe8-AQenTb
  LV Write Access        read/write
  LV Creation host, time wrops1.blue, 2015-07-31 10:40:57 +0200
  LV Status              available
  # open                 1
  LV Size                1.00 TiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:29

  --- Segments ---
  Logical extents 0 to 8191:
    Type                linear
    Physical volume     /dev/mapper/3600000e00d0000000024057200000000
    Physical extents    8145 to 16336

Deactivating the LV doesn't work:

[root@wrops1 BLUE ~]# lvchange -an /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  Logical volume e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is used by another device.
Looks like your LV is used as a physical volume in another VG - probably a VG created inside a guest. LVM and systemd try hard to discover anything they find on multipath devices and expose it on the hypervisor.
Can you share the output of:
ls -l /sys/block/$(basename $(readlink /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8))/holders

And:

pvscan --cache
vgs -o pv_name,vg_name
Nir
Removing it from the hypervisor doesn't work either:

[root@wrops1 BLUE ~]# lvremove --force /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  Logical volume e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is used by another device.
I tried rebooting one host, and as soon as it came back up the volume was open again. lsof on all hosts doesn't show anything meaningful for this LV, as opposed to the other LVs, which are held by qemu-kvm.
Has anyone encountered a similar problem? How can I remove this LV?

On Mon, Jun 13, 2016 at 12:06 PM, Krzysztof Dajka <alteriks@gmail.com> wrote:
Hi Nir,
Thanks for the solution. I hadn't noticed the guest LV /dev/backupvg01/backuplv01 active on all hypervisors. It turns out I have the same issue with two additional volumes, but nobody noticed because they are only a few GB.

[root@wrops2 BLUE/WRO ~]# ls -l /sys/block/$(basename $(readlink /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8))/holders
total 0
lrwxrwxrwx. 1 root root 0 Jun 13 10:48 dm-43 -> ../../dm-43

[root@wrops2 BLUE/WRO ~]# pvscan --cache
[root@wrops2 BLUE/WRO ~]# vgs -o pv_name,vg_name
  PV                                                                               VG
  /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8   backupvg01
  /dev/sda2                                                                        centos_wrops2
  /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/99a1c067-9728-484a-a0cb-cb6689d5724c   deployvg
  /dev/mapper/3600000e00d0000000024057200000000                                    e69d1c16-36d1-4375-aaee-69f5a5ce1616
  /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/86a6d83f-2661-4fe3-8874-ce4d8a111c0d   jenkins
  /dev/sda3                                                                        w2vg1

[root@wrops2 BLUE/WRO ~]# dmsetup info
Name:              backupvg01-backuplv01
State:             ACTIVE
Read Ahead:        8192
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 43
Number of targets: 1
UUID: LVM-ubxOH5R2h6B8JwLGfhpiNjnAKlPxMPy6KfkeLBxXajoT3gxU0yC5JvOQQVkixrTA

[root@wrops2 BLUE/WRO ~]# lvchange -an /dev/backupvg01/backuplv01
[root@wrops2 BLUE/WRO ~]# lvremove /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
Do you really want to remove active logical volume ee53af81-820d-4916-b766-5236ca99daf8? [y/n]: y
  Logical volume "ee53af81-820d-4916-b766-5236ca99daf8" successfully removed

Would this configuration in lvm.conf on all hypervisors solve the problem of scanning guest volumes?

filter = [ "r|/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/.*|" ]
I would use global_filter, to make sure that commands passing a filter on the command line do not override your filter. vdsm is such an application - it runs LVM commands with --config 'devices { filter = ... }'.

Nir
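A minimal sketch of that in /etc/lvm/lvm.conf on each hypervisor, reusing the storage domain VG name from the vgs output above (adjust the pattern to your environment):

  devices {
      # Reject the oVirt storage domain LVs so guest-created PVs on them are
      # never scanned or auto-activated on the host. Unlike "filter",
      # "global_filter" still applies when a command passes its own filter
      # via --config, which is what vdsm does.
      global_filter = [ "r|/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/.*|" ]
  }

Guest LVs that are already active still have to be deactivated once (as with lvchange -an above); after that the filter should keep them from being rediscovered on the next boot or pvscan --cache.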
2016-06-11 23:16 GMT+02:00 Nir Soffer <nsoffer@redhat.com>:
On Thu, Jun 9, 2016 at 11:46 AM, Krzysztof Dajka <alteriks@gmail.com> wrote:
Hi,
Recently I tried to delete a 1 TB disk created on top of a ~3 TB LUN from oVirt engine. The disk is preallocated, and I backed up the data to another disk so I could recreate it as a thin volume. I couldn't remove the disk while it was attached to a VM, but once I detached it I could remove it permanently. The thing is, it only disappeared from the oVirt engine GUI.
I've got 4 hosts with FC HBAs attached to the storage array, and all of them report that this 1 TB disk, which should be gone, is still open on every host simultaneously.
[root@wrops1 BLUE ~]# lvdisplay -m /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  --- Logical volume ---
  LV Path                /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  LV Name                ee53af81-820d-4916-b766-5236ca99daf8
  VG Name                e69d1c16-36d1-4375-aaee-69f5a5ce1616
  LV UUID                sBdBRk-tNyZ-Rval-F4lw-ka6X-wOe8-AQenTb
  LV Write Access        read/write
  LV Creation host, time wrops1.blue, 2015-07-31 10:40:57 +0200
  LV Status              available
  # open                 1
  LV Size                1.00 TiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:29

  --- Segments ---
  Logical extents 0 to 8191:
    Type                linear
    Physical volume     /dev/mapper/3600000e00d0000000024057200000000
    Physical extents    8145 to 16336

Deactivating the LV doesn't work:

[root@wrops1 BLUE ~]# lvchange -an /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  Logical volume e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is used by another device.
Looks like your LV is used as a physical volume in another VG - probably a VG created inside a guest. LVM and systemd try hard to discover anything they find on multipath devices and expose it on the hypervisor.
Can you share the output of:
ls -l /sys/block/$(basename $(readlink /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8))/holders

And:

pvscan --cache
vgs -o pv_name,vg_name
Nir
Removing it from the hypervisor doesn't work either:

[root@wrops1 BLUE ~]# lvremove --force /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
  Logical volume e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is used by another device.
I tried rebooting one host, and as soon as it came back up the volume was open again. lsof on all hosts doesn't show anything meaningful for this LV, as opposed to the other LVs, which are held by qemu-kvm.
Has anyone encountered a similar problem? How can I remove this LV?
participants (2)
- Krzysztof Dajka
- Nir Soffer