<div dir="ltr"><div>Hi Nir,</div><div><br></div><div>Thanks for solution. I didn't notice the guest /dev/backupvg01/backuplv01 on all hypervisors. It seems that I've got this issue with 2 additionals volumes, but no one noticed because they were only few gb.</div><div><br></div><div><br></div><div><div>[root@wrops2 BLUE/WRO ~]# ls -l /sys/block/$(basename $(readlink /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8))/holders</div><div>total 0</div><div>lrwxrwxrwx. 1 root root 0 Jun 13 10:48 dm-43 -> ../../dm-43</div></div><div><br></div><div>[root@wrops2 BLUE/WRO ~]# pvscan --cache</div><div>[root@wrops2 BLUE/WRO ~]# vgs -o pv_name,vg_name</div><div> PV VG </div><div> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 backupvg01 </div><div> /dev/sda2 centos_wrops2 </div><div> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/99a1c067-9728-484a-a0cb-cb6689d5724c deployvg </div><div> /dev/mapper/3600000e00d0000000024057200000000 e69d1c16-36d1-4375-aaee-69f5a5ce1616</div><div> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/86a6d83f-2661-4fe3-8874-ce4d8a111c0d jenkins </div><div> /dev/sda3 w2vg1 </div><div><br></div><div><div>[root@wrops2 BLUE/WRO ~]# dmsetup info</div></div><div><div>Name: backupvg01-backuplv01</div><div>State: ACTIVE</div><div>Read Ahead: 8192</div><div>Tables present: LIVE</div><div>Open count: 0</div><div>Event number: 0</div><div>Major, minor: 253, 43</div><div>Number of targets: 1</div><div>UUID: LVM-ubxOH5R2h6B8JwLGfhpiNjnAKlPxMPy6KfkeLBxXajoT3gxU0yC5JvOQQVkixrTA</div></div><div><br></div><div><div>[root@wrops2 BLUE/WRO ~]# lvchange -an /dev/backupvg01/backuplv01 </div></div><div><div>[root@wrops2 BLUE/WRO ~]# lvremove /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8</div></div><div><div>Do you really want to remove active logical volume ee53af81-820d-4916-b766-5236ca99daf8? [y/n]: y</div><div> Logical volume "ee53af81-820d-4916-b766-5236ca99daf8" successfully removed</div></div><div><br></div><div><br></div><div><div>Would this configuration in lvm.conf:</div><div>filter = [ "r|/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/.*|" ]<br></div><div>on all hypervisors solve problem of scanning guest volumes?</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">2016-06-11 23:16 GMT+02:00 Nir Soffer <span dir="ltr"><<a href="mailto:nsoffer@redhat.com" target="_blank">nsoffer@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On Thu, Jun 9, 2016 at 11:46 AM, Krzysztof Dajka <<a href="mailto:alteriks@gmail.com">alteriks@gmail.com</a>> wrote:<br>
2016-06-11 23:16 GMT+02:00 Nir Soffer <nsoffer@redhat.com>:
> On Thu, Jun 9, 2016 at 11:46 AM, Krzysztof Dajka <alteriks@gmail.com> wrote:
>> Hi,
>>
>> Recently I tried to delete a 1TB disk, created on top of a ~3TB LUN,
>> from ovirt-engine. The disk is preallocated, and I had backed up its
>> data to another disk so that I could recreate it as a thin volume. I
>> couldn't remove the disk while it was attached to a VM, but once I
>> detached it I could remove it permanently. The thing is, it only
>> disappeared from the ovirt-engine GUI.
>>
>> I've got 4 hosts with FC HBAs attached to the storage array, and all
>> of them report that this 1TB disk, which should be gone, is open on
>> all of them simultaneously.
>>
>> [root@wrops1 BLUE ~]# lvdisplay -m /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
>>   --- Logical volume ---
>>   LV Path                /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
>>   LV Name                ee53af81-820d-4916-b766-5236ca99daf8
>>   VG Name                e69d1c16-36d1-4375-aaee-69f5a5ce1616
>>   LV UUID                sBdBRk-tNyZ-Rval-F4lw-ka6X-wOe8-AQenTb
>>   LV Write Access        read/write
>>   LV Creation host, time wrops1.blue, 2015-07-31 10:40:57 +0200
>>   LV Status              available
>>   # open                 1
>>   LV Size                1.00 TiB
>>   Current LE             8192
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>   - currently set to     8192
>>   Block device           253:29
>>
>>   --- Segments ---
>>   Logical extents 0 to 8191:
>>     Type             linear
>>     Physical volume  /dev/mapper/3600000e00d0000000024057200000000
>>     Physical extents 8145 to 16336
>>
>> Deactivating the LV doesn't work:
>> [root@wrops1 BLUE ~]# lvchange -an /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
>>   Logical volume e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is used by another device.

> Looks like your LV is used as a physical volume in another VG, probably
> a VG created inside a guest. LVM and systemd try hard to discover
> anything on multipath devices and expose it all to the hypervisor.
>
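To make that stacking visible: each dm device lists whatever sits on top of it under /sys/block/<dm>/holders, and sysfs can resolve each holder back to an LVM name. A minimal sketch of walking that chain, assuming the stock device-mapper sysfs layout (the LV path is the one from this thread):

lv=/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
dev=$(basename "$(readlink "$lv")")   # resolve the LV symlink to its dm-* node
for h in /sys/block/"$dev"/holders/*; do
    [ -e "$h" ] || continue           # empty glob: nothing is stacked on the LV
    # Each holder is itself a dm device; dm/name gives its LVM name,
    # e.g. dm-43 -> backupvg01-backuplv01
    echo "$(basename "$h") -> $(cat "$h/dm/name")"
done
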
> Can you share the output of:
>
> ls -l /sys/block/$(basename $(readlink /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8))/holders
>
> And:
>
> pvscan --cache
> vgs -o pv_name,vg_name
>
> Nir
<span class=""><br>
> Removing from hypervisor doesn't work either.<br>
> [root@wrops1 BLUE ~]# lvremove --force<br>
> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8<br>
> Logical volume<br>
> e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is<br>
> used by another device.<br>
><br>
> I tried and rebooted one host and as soon as it booted the volume became<br>
> opened once again. Lsof on all hosts doesn't give anything meaningful<br>
> regarding this LV. As opposed to other LV which are used by qemu-kvm.<br>
><br>
> Has anyone encountered similar problem? How can I remove this LV?<br>
><br>
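A side note on why lsof stays silent here: the hold on the LV is a kernel-level device-mapper reference, not an open file descriptor in any process, so no process listing can show it. dmsetup reads the kernel's open counts directly, for example:

# Kernel-level open counts per dm device; open > 0 with no owning process
# points to another device stacked on top, not to a file descriptor.
dmsetup info -c -o name,open,major,minor
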
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users