On Tue, Mar 22, 2022 at 7:17 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
On Tue, Mar 22, 2022 at 6:57 PM Abe E <aellahib(a)gmail.com> wrote:
>
> Yes it throws the following:
>
> This is the recommended LVM filter for this host:
>
> filter = [
>     "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|",
>     "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|",
>     "r|.*|" ]
This is not complete output - did you strip the lines explaining why
we need this filter?
> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add a new
> device to the volume group, you will need to edit the filter manually.
>
> This is the current LVM filter:
>
> filter = [
>     "a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|",
>     "a|^/dev/sda|", "r|.*|" ]
So the issue is that you likely have a stale lvm filter for a device
which is not used by the host.
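
To find the stale entry, you can compare the PV UUIDs in the filter
with the PVs actually present on the host, for example (a rough
sketch, the device names are just the ones from your output):

  # List the lvm-pv-uuid symlinks that exist on this host; a filter
  # entry with no matching symlink is likely stale:
  ls -l /dev/disk/by-id/lvm-pv-uuid-*

  # Show each PV with the VG using it and its UUID:
  pvs -o pv_name,vg_name,pv_uuid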
>
> To use the recommended filter we need to add multipath
> blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
>
> blacklist {
> wwid "364cd98f06762ec0029afc17a03e0cf6a"
> }
>
>
> WARNING: The current LVM filter does not match the recommended filter,
> Vdsm cannot configure the filter automatically.
>
> Please edit /etc/lvm/lvm.conf and set the 'filter' option in the
> 'devices' section to the recommended value.
>
> Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the
> recommended 'blacklist' section.
>
> It is recommended to reboot to verify the new configuration.
>
>
>
>
> I updated my entry to the following (Blacklist is already configured
> from before):
>
> filter = [
>     "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|",
>     "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|",
>     "a|^/dev/sda|", "r|.*|" ]
>
>
> although then it threw this error
>
> [root@ovirt-2 ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Parse error at byte 106979 (line 2372): unexpected token
> Failed to load config file /etc/lvm/lvm.conf
> Traceback (most recent call last):
>   File "/usr/bin/vdsm-tool", line 209, in main
>     return tool_command[cmd]["command"](*args)
>   File "/usr/lib/python3.6/site-packages/vdsm/tool/config_lvm_filter.py", line 65, in main
>     mounts = lvmfilter.find_lvm_mounts()
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 170, in find_lvm_mounts
>     vg_name, tags = vg_info(name)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 467, in vg_info
>     lv_path
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 566, in _run
>     out = subprocess.check_output(args)
>   File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
>     **kwargs).stdout
>   File "/usr/lib64/python3.6/subprocess.py", line 438, in run
>     output=stdout, stderr=stderr)
> subprocess.CalledProcessError: Command '['/usr/sbin/lvm', 'lvs', '--noheadings', '--readonly', '--config', 'devices {filter=["a|.*|"...' returned non-zero exit status 4.
I'm not sure if this error comes from the code configuring the lvm
filter, or from lvm itself.

The best way to handle this depends on why you have an lvm filter that
vdsm-tool cannot handle.
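
One way to narrow this down (a rough sketch, assuming lvm2's lvmconfig
is available on your host) is to ask lvm itself to validate the file:

  # Parse and validate /etc/lvm/lvm.conf; an error here means the
  # problem is in the file itself, not in vdsm-tool:
  lvmconfig --validate

  # The parse error mentions line 2372, so inspecting that area of
  # the file may show the unexpected token:
  sed -n '2368,2376p' /etc/lvm/lvm.conf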
If you know why the lvm filter is set to the current value, and you
know that the system actually needs all the devices in the filter, you
can keep the current lvm filter.
If you don't know why the current lvm filter is set to this value, you
can remove the lvm filter from lvm.conf and run "vdsm-tool
config-lvm-filter" to let the tool configure the default filter.
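
For example (a rough sketch; keep a backup and review the change
before running the tool):

  # Back up the current configuration:
  cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak

  # Edit /etc/lvm/lvm.conf and remove (or comment out) the
  # "filter = [ ... ]" line in the "devices" section, then run:
  vdsm-tool config-lvm-filter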
In general, the lvm filter allows the host to access the devices
needed by the host, for example the devices backing the root file
system.

If you are not sure which devices are required, please share the
*complete* output of running "vdsm-tool config-lvm-filter", with an
lvm.conf that does not include any filter.
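
If you want to check yourself which devices back the mounted logical
volumes, something like this should work (standard tools, nothing
vdsm specific):

  # Show the block device tree with mountpoints:
  lsblk -o NAME,TYPE,MOUNTPOINT

  # Show each logical volume and the physical devices behind it:
  lvs -o lv_name,vg_name,devices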
Example of running config-lvm-filter on a RHEL 8.6 host with oVirt 4.5:

# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/rhel-root
  mountpoint:      /
  devices:         /dev/vda2

  logical volume:  /dev/mapper/rhel-swap
  mountpoint:      [SWAP]
  devices:         /dev/vda2

  logical volume:  /dev/mapper/test-lv1
  mountpoint:      /data
  devices:         /dev/mapper/0QEMU_QEMU_HARDDISK_123456789

Configuring LVM system.devices.
Devices for following VGs will be imported:
  rhel, test

Configure host? [yes,NO]
The tool shows that we have 3 mounted logical volumes, and suggests
configuring the lvmdevices file for 2 volume groups.

On oVirt 4.4, the configuration method is the lvm filter, and the tool
suggests the required filter for the mounted logical volumes.
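
On a 4.5 host you can inspect the result with lvmdevices (a rough
sketch; the command is available with recent lvm2 versions):

  # List the devices lvm is allowed to use; the entries are stored
  # in /etc/lvm/devices/system.devices:
  lvmdevices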