On Mon, Oct 5, 2020 at 2:19 AM Nir Soffer <nsoffer@redhat.com> wrote:
On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer <abawer@redhat.com> wrote:
>
> On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
>>
>> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer <abawer@redhat.com> wrote:
>>>
>>> Since there wasn't a filter set on the node, the 4.4.2 update added the default filter for the root-lv PV.
>>> If some filter had been set before the upgrade, the update would not have added one.
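(For reference, I guess the default filter added in that case would look something like the line below, built here with the root PV uuid from the vdsm-tool output further down; this is only to illustrate the format, not necessarily what the update actually wrote:)

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|", "r|.*|" ]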
>>
>> Do you mean that I will get the same problem when upgrading from 4.4.2 to an upcoming 4.4.3, since I still don't have any filter set?
>> That would not be desirable...
>
> Once you have got back into 4.4.2, it's recommended to set the LVM filter to fit the PVs you use on your node.
> For the local root PV you can run:
> # vdsm-tool config-lvm-filter -y
> For the gluster bricks you'll need to add their UUIDs to the filter as well.
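(Side note: to find the brick PV uuids to put in the filter, I suppose one can match the PVs against the brick VGs and look at the by-id symlinks, something like:)

  # pvs -o pv_name,vg_name,pv_uuid
  # ls -l /dev/disk/by-id/lvm-pv-uuid-*

(the symlink names are what goes into the "a|^...$|" entries; this assumes the standard lvm2 udev rules that create those by-id links, which are present on my host.)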

vdsm-tool is expected to add all the devices needed by the mounted
logical volumes, so adding devices manually should not be needed.

If this does not work, please file a bug and include all the info needed to
reproduce the issue.
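(To cross-check, one can compare the mounted logical volumes on the host with what the tool lists; just a sketch using standard commands:)

  # findmnt -o SOURCE,TARGET,FSTYPE
  # lsblk -o NAME,TYPE,MOUNTPOINT
  # grep filter /etc/lvm/lvm.conf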


I don't know what exactly happened when I installed ovirt-ng-node in 4.4.0, but the effect was that no filter at all was set up in lvm.conf, hence the problem I had upgrading to 4.4.2.
Is there any way to see the related logs for 4.4.0? And in which phase is the vdsm-tool command supposed to run: during the install of the node itself, or during the gluster-based wizard?

Right now, on 4.4.2, I get this output, so it seems to work:

"
[root@ovirt01 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_data
  mountpoint:      /gluster_bricks/data
  devices:         /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_engine
  mountpoint:      /gluster_bricks/engine
  devices:         /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_vmstore
  mountpoint:      /gluster_bricks/vmstore
  devices:         /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/onn-home
  mountpoint:      /home
  devices:         /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1
  mountpoint:      /
  devices:         /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-swap
  mountpoint:      [SWAP]
  devices:         /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-tmp
  mountpoint:      /tmp
  devices:         /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var
  mountpoint:      /var
  devices:         /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_crash
  mountpoint:      /var/crash
  devices:         /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_log
  mountpoint:      /var/log
  devices:         /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_log_audit
  mountpoint:      /var/log/audit
  devices:         /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|", "a|^/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr$|", "r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

To use the recommended filter we need to add multipath
blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:

  blacklist {
      wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V"
      wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K"
  }


Configure host? [yes,NO] 

"
Does this mean that by answering "yes" both the LVM and the multipath related files will be modified?
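I suppose that after answering "yes" I could verify both changes with something like this (assuming the filter ends up in /etc/lvm/lvm.conf; the blacklist file path is the one printed above):

  # grep filter /etc/lvm/lvm.conf
  # cat /etc/multipath/conf.d/vdsm_blacklist.conf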

Right now my multipath is configured this way:

[root@ovirt01 ~]# grep -v "^#" /etc/multipath.conf | grep -v "^    #" | grep -v "^$"
defaults {
    polling_interval            5
    no_path_retry               4
    user_friendly_names         no
    flush_on_last_del           yes
    fast_io_fail_tmo            5
    dev_loss_tmo                30
    max_fds                     4096
}
blacklist {
        protocol "(scsi:adt|scsi:sbp)"
}
overrides {
      no_path_retry            4
}
[root@ovirt01 ~]#

with the blacklist explicit for both disks, but in different files:

root disk:
[root@ovirt01 ~]# cat /etc/multipath/conf.d/vdsm_blacklist.conf
# This file is managed by vdsm, do not edit!
# Any changes made to this file will be overwritten when running:
# vdsm-tool config-lvm-filter

blacklist {
    wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K"
}
[root@ovirt01 ~]#

gluster disk:
[root@ovirt01 ~]# cat /etc/multipath/conf.d/blacklist.conf
# BEGIN ANSIBLE MANAGED BLOCK
blacklist {
# BEGIN ANSIBLE MANAGED BLOCK sda
wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V"
# END ANSIBLE MANAGED BLOCK sda
}
# END ANSIBLE MANAGED BLOCK
[root@ovirt01 ~]#
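(If I want to see the effective blacklist, i.e. the main file merged with the conf.d snippets, I believe something like this shows it:)

  # multipathd show config | grep -A 10 'blacklist {'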


[root@ovirt01 ~]# cat /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V/
[root@ovirt01 ~]#

and in fact no multipath devices are set up, due to the blacklist sections for the local disks...

[root@ovirt01 ~]# multipath -l
[root@ovirt01 ~]#
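(Just to double check that sda, the gluster disk from blacklist.conf above, is really excluded, I suppose the path-check mode can be used and the exit status inspected:)

  # multipath -c /dev/sda
  # echo $?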

Gianluca