
On Thu, Mar 31, 2022 at 6:03 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Thu, Mar 31, 2022 at 4:45 PM Nir Soffer <nsoffer@redhat.com> wrote:
Regarding removing the vg on the other nodes - you don't need to do anything. On the host, the vg is hidden since you use an lvm filter. Vdsm can still see the vg since it runs lvm with a filter that includes all the luns on the system. Vdsm will see the change the next time it runs pvs, vgs, or lvs.
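As a rough illustration (assuming the vg is named vg-name, as in the commands below), you can compare the filtered view the host uses with the unfiltered view vdsm uses:

    # host view - the vg should be hidden by the lvm filter
    vgs vg-name

    # vdsm-like view - overriding the filter to include all luns
    vgs --config 'devices { filter = ["a|.*|" ] }' vg-name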
Nir
Ok, thank you very much. So I will:
. remove the LVM structures on one node (probably I'll use the SPM host, but as you said it shouldn't matter)
. remove the multipath devices and paths on both hosts (hoping the second host doesn't complain about the LVM presence, because it is actually hidden by the filter...)
. have the SAN mgmt guys unpresent the LUN from both hosts
. rescan the SAN from inside oVirt (to verify the LUN is not detected any more and at the same time that all the expected LUNs/paths are ok)
That way the second host should also end up updated with regard to the LVM structures... correct?
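For example, I'd expect that after the removal something like this on the second host shows the vg gone (just a sketch, assuming the vg is named vg-name):

    # should report that the vg is not found, once it was removed on the other host
    vgs --config 'devices { filter = ["a|.*|" ] }' vg-name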
The right order his: 1. Make sure the vg does not have any active lv on any host, since you removed it in the path without formatting, and some lvs may be activated by mistake since that time. vgchange -an --config 'devices { filter = ["a|.*|" ] }' vg-name 2. Remove the vg on one of the hosts (assuming you don't need the data) vgremove -f --config 'devices { filter = ["a|.*|" ] }' vg-name If you don't plan to use this vg with lvm, you can remove the pvs 3. Have the SAN mgmt guys unpresent LUN from both hosts This should be done before removing the multipath devices, otherwise scsi rescan initiated by vdsm may discover the devices again and recreate the multipath devices. 4. Remove the multipath devices and the scsi devices related to these luns To verify you can use lsblk on the hosts, the devices will disappear. If you want to make sure the luns were unzoned, doing a rescan is a good idea. it can be done by opening the "new domain" or "manage domain" in ovirt UI, or by running: vdsm-client Host getDeviceList checkStatus='' Nir