VDSM Issue after Upgrade of Node in HCI

So the 2nd node in my cluster showed an upgrade option in oVirt. I put it in maintenance mode and ran the upgrade. It went through, but at one point it lost its internet connection or its connection within Gluster; it never got to the reboot step and simply lost its connection to the engine from there. I can see that Gluster is still running and all three bricks kept syncing, but it seems VDSM may be the culprit here. ovirt-ha-agent won't start, and hosted-engine --connect-storage returns:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/connect_storage_server.py", line 30, in <module>
    timeout=ohostedcons.Const.STORAGE_SERVER_TIMEOUT,
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 312, in connect_storage_server
    sserver.connect_storage_server(timeout=timeout)
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_server.py", line 411, in connect_storage_server
    timeout=timeout,
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/util.py", line 474, in connect_vdsm_json_rpc
    __vdsm_json_rpc_connect(logger, timeout)
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/util.py", line 415, in __vdsm_json_rpc_connect
    timeout=VDSM_MAX_RETRY * VDSM_DELAY
RuntimeError: Couldn't connect to VDSM within 60 seconds

VDSM just keeps restarting in a loop and failing. vdsm-tool configure --force throws this:

[root@ovirt-2 ~]# vdsm-tool configure --force

Checking configuration status...

sanlock is configured for vdsm
abrt is already configured for vdsm
Current revision of multipath.conf detected, preserving
lvm is configured for vdsm
Managed volume database is already configured
libvirt is already configured for vdsm
SUCCESS: ssl configured to true. No conflicts

Running configure...
libsepol.context_from_record: type insights_client_var_lib_t is not defined
libsepol.context_from_record: could not create context structure
libsepol.context_from_string: could not create context structure
libsepol.sepol_context_to_sid: could not convert system_u:object_r:insights_client_var_lib_t:s0 to sid
invalid context system_u:object_r:insights_client_var_lib_t:s0
libsemanage.semanage_validate_and_compile_fcontexts: setfiles returned error code 255.
Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 209, in main
    return tool_command[cmd]["command"](*args)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py", line 40, in wrapper
    func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py", line 145, in configure
    _configure(c)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py", line 92, in _configure
    getattr(module, 'configure', lambda: None)()
  File "/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py", line 88, in configure
    _setup_booleans(True)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py", line 60, in _setup_booleans
    sebool_obj.finish()
  File "/usr/lib/python3.6/site-packages/seobject.py", line 340, in finish
    self.commit()
  File "/usr/lib/python3.6/site-packages/seobject.py", line 330, in commit
    rc = semanage_commit(self.sh)
OSError: [Errno 0] Error

Does anyone have ideas on how I could recover this? I am not sure whether something got corrupted during the update or on a reboot. I would prefer updating nodes from the CLI next time, but unfortunately I have not looked that far into it; it would have made it much easier to see what failed and where.
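The stderr points at the sebool configurator: semanage cannot commit its transaction because the local SELinux file-context store still references insights_client_var_lib_t, a type the installed policy no longer defines. A possible way to confirm and clear such a stale local customization, sketched here as an assumption (the path in the delete command is only an example and must match whatever the list command actually shows):

# list local file-context customizations and look for the undefined type
semanage fcontext -l -C | grep insights_client_var_lib_t

# delete the stale local entry (example path shown; use the path listed above)
semanage fcontext -d "/var/lib/insights(/.*)?"

# rebuild the policy store and retry the vdsm configuration
semodule -B
vdsm-tool configure --force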

Interestingly enough, I am able to re-install oVirt from the engine up to a certain point. I ran a re-install and it failed, asking me to run vdsm-tool config-lvm-filter:

Error: Installing Host ovirt-2... Check for LVM filter configuration error: Cannot configure LVM filter on host, please run: vdsm-tool config-lvm-filter.

On Tue, Mar 22, 2022 at 6:09 PM Abe E <aellahib@gmail.com> wrote:
Interestingly enough, I am able to re-install oVirt from the engine up to a certain point. I ran a re-install and it failed, asking me to run vdsm-tool config-lvm-filter:

Error: Installing Host ovirt-2... Check for LVM filter configuration error: Cannot configure LVM filter on host, please run: vdsm-tool config-lvm-filter.
Did you try to run it? Please share the complete output of running: vdsm-tool config-lvm-filter

Nir

Yes, it throws the following:

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|", "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|", "r|.*|" ]

This filter allows LVM to access the local devices used by the hypervisor, but not shared storage owned by Vdsm. If you add a new device to the volume group, you will need to edit the filter manually.

This is the current LVM filter:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|", "a|^/dev/sda|", "r|.*|" ]

To use the recommended filter we need to add multipath blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:

  blacklist {
      wwid "364cd98f06762ec0029afc17a03e0cf6a"
  }

WARNING: The current LVM filter does not match the recommended filter, Vdsm cannot configure the filter automatically. Please edit /etc/lvm/lvm.conf and set the 'filter' option in the 'devices' section to the recommended value. Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the recommended 'blacklist' section. It is recommended to reboot to verify the new configuration.

I updated my entry to the following (the blacklist is already configured from before):

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|","a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|","a|^/dev/sda|","r|.*|" ]

although then it threw this error:

[root@ovirt-2 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Parse error at byte 106979 (line 2372): unexpected token
Failed to load config file /etc/lvm/lvm.conf
Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 209, in main
    return tool_command[cmd]["command"](*args)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/config_lvm_filter.py", line 65, in main
    mounts = lvmfilter.find_lvm_mounts()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 170, in find_lvm_mounts
    vg_name, tags = vg_info(name)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 467, in vg_info
    lv_path
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 566, in _run
    out = subprocess.check_output(args)
  File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib64/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/usr/sbin/lvm', 'lvs', '--noheadings', '--readonly', '--config', 'devices {filter=["a|.*|"ed non-zero exit status 4.

I thought maybe it required a reboot, although now it failed to reboot, so I am physically going to check on it.
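The "Parse error at byte 106979 (line 2372): unexpected token" comes from LVM itself, so the hand-edited filter line in /etc/lvm/lvm.conf was most likely syntactically invalid (for example an unbalanced quote or bracket). As a sketch only, reusing the UUIDs shown above and not implying that /dev/sda belongs in the filter, the edited entry has to be one valid array inside the devices section, along these lines:

devices {
    # one array, every pattern quoted, entries separated by commas
    filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|", "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|", "a|^/dev/sda|", "r|.*|" ]
}

After editing, running "lvmconfig devices/filter" should print the filter back without a parse error before vdsm-tool config-lvm-filter is tried again.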

On Tue, Mar 22, 2022 at 6:57 PM Abe E <aellahib@gmail.com> wrote:
Yes it throws the following:
This is the recommended LVM filter for this host:
filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|", "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|", "r|.*|" ]
This is not the complete output - did you strip the lines explaining why we need this filter?
This filter allows LVM to access the local devices used by the hypervisor, but not shared storage owned by Vdsm. If you add a new device to the volume group, you will need to edit the filter manually.
This is the current LVM filter:
filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|", "a|^/dev/sda|", "r|.*|" ]
So the issue is that you likely have a stale lvm filter for a device which is not used by the host.
To use the recommended filter we need to add multipath blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
blacklist { wwid "364cd98f06762ec0029afc17a03e0cf6a" }
WARNING: The current LVM filter does not match the recommended filter, Vdsm cannot configure the filter automatically.
Please edit /etc/lvm/lvm.conf and set the 'filter' option in the 'devices' section to the recommended value.
Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the recommended 'blacklist' section.
It is recommended to reboot to verify the new configuration.
I updated my entry to the following (Blacklist is already configured from before): filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|","a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|","a|^/dev/sda|","r|.*|" ]
although then it threw this error
[root@ovirt-2 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Parse error at byte 106979 (line 2372): unexpected token
Failed to load config file /etc/lvm/lvm.conf
Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 209, in main
    return tool_command[cmd]["command"](*args)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/config_lvm_filter.py", line 65, in main
    mounts = lvmfilter.find_lvm_mounts()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 170, in find_lvm_mounts
    vg_name, tags = vg_info(name)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 467, in vg_info
    lv_path
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 566, in _run
    out = subprocess.check_output(args)
  File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib64/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/usr/sbin/lvm', 'lvs', '--noheadings', '--readonly', '--config', 'devices {filter=["a|.*|"ed non-zero exit status 4.
I'm not sure if this error comes from the code configuring the lvm filter, or from lvm itself.

The best way to handle this depends on why you have an lvm filter that vdsm-tool cannot handle.

If you know why the lvm filter is set to the current value, and you know that the system actually needs all the devices in the filter, you can keep the current lvm filter.

If you don't know why the current lvm filter is set to this value, you can remove the lvm filter from lvm.conf, and run "vdsm-tool config-lvm-filter" to let the tool configure the default filter.

In general, the lvm filter allows the host to access the devices needed by the host, for example the root file system. If you are not sure what the required devices are, please share the *complete* output of running "vdsm-tool config-lvm-filter", with an lvm.conf that does not include any filter.

Nir
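For reference, the procedure described here (drop the hand-written filter and let the tool regenerate it) might look roughly like the following on the host. The sed expression assumes the filter is a single line in the devices section, so treat this as a sketch rather than a fixed recipe:

# keep a copy of the current configuration
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak

# comment out the existing 'filter = [...]' line in the devices { } section
# (editing by hand works just as well)
sed -i 's/^\(\s*filter = \[\)/# \1/' /etc/lvm/lvm.conf

# let vdsm-tool write the recommended filter, then reboot and re-check
vdsm-tool config-lvm-filter
reboot
# after the reboot the tool should report:
#   LVM filter is already configured for Vdsm
vdsm-tool config-lvm-filter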

On Tue, Mar 22, 2022 at 7:17 PM Nir Soffer <nsoffer@redhat.com> wrote:
On Tue, Mar 22, 2022 at 6:57 PM Abe E <aellahib@gmail.com> wrote:
Yes it throws the following:
This is the recommended LVM filter for this host:
filter = [
"a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|", "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|", "r|.*|" ]
This is not the complete output - did you strip the lines explaining why we need this filter?
This filter allows LVM to access the local devices used by the hypervisor, but not shared storage owned by Vdsm. If you add a new device to the volume group, you will need to edit the filter manually.
This is the current LVM filter:
filter = [
"a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|", "a|^/dev/sda|", "r|.*|" ]
So the issue is that you likely have a stale lvm filter for a device which is not used by the host.
To use the recommended filter we need to add multipath blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
blacklist { wwid "364cd98f06762ec0029afc17a03e0cf6a" }
WARNING: The current LVM filter does not match the recommended filter, Vdsm cannot configure the filter automatically.
Please edit /etc/lvm/lvm.conf and set the 'filter' option in the 'devices' section to the recommended value.
Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the recommended 'blacklist' section.
It is recommended to reboot to verify the new configuration.
I updated my entry to the following (the blacklist is already configured from before):

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|","a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|","a|^/dev/sda|","r|.*|" ]

although then it threw this error:

[root@ovirt-2 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Parse error at byte 106979 (line 2372): unexpected token
Failed to load config file /etc/lvm/lvm.conf
Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 209, in main
    return tool_command[cmd]["command"](*args)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/config_lvm_filter.py", line 65, in main
    mounts = lvmfilter.find_lvm_mounts()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 170, in find_lvm_mounts
    vg_name, tags = vg_info(name)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 467, in vg_info
    lv_path
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 566, in _run
    out = subprocess.check_output(args)
  File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib64/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/usr/sbin/lvm', 'lvs', '--noheadings', '--readonly', '--config', 'devices {filter=["a|.*|"ed non-zero exit status 4.
I'm not sure if this error comes from the code configuring lvm filter, or from lvm.
The best way to handle this depends on why you have an lvm filter that vdsm-tool cannot handle.
If you know why the lvm filter is set to the current value, and you know that the system actually needs all the devices in the filter, you can keep the current lvm filter.
If you don't know why the current lvm filter is set to this value, you can remove the lvm filter from lvm.conf, and run "vdsm-tool config-lvm-filter" to let the tool configure the default filter.
In general, the lvm filter allows the host to access the devices needed by the host, for example the root file system.
If you are not sure what the required devices are, please share the *complete* output of running "vdsm-tool config-lvm-filter", with an lvm.conf that does not include any filter.
Example of running config-lvm-filter on a RHEL 8.6 host with oVirt 4.5:

# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/rhel-root
  mountpoint:      /
  devices:         /dev/vda2

  logical volume:  /dev/mapper/rhel-swap
  mountpoint:      [SWAP]
  devices:         /dev/vda2

  logical volume:  /dev/mapper/test-lv1
  mountpoint:      /data
  devices:         /dev/mapper/0QEMU_QEMU_HARDDISK_123456789

Configuring LVM system.devices.
Devices for following VGs will be imported: rhel, test
Configure host? [yes,NO]

The tool shows that we have 3 mounted logical volumes, and suggests configuring the lvmdevices file for 2 volume groups. On oVirt 4.4, the configuration method is the lvm filter, and the tool suggests the required filter for the mounted logical volumes.
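On the 4.5-era setup, the device list that replaces the filter lives in /etc/lvm/devices/system.devices and is managed with the lvmdevices and vgimportdevices commands. A quick sketch of how to inspect and populate it, assuming an LVM build with devices-file support (the VG name "rhel" is just the one from the example above):

# show the entries currently in /etc/lvm/devices/system.devices
lvmdevices

# import the devices used by a specific VG, or all VGs with -a
vgimportdevices rhel
vgimportdevices -a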

Apologies, here it is:

[root@ovirt-2 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/gluster_vg_sda4-gluster_lv_data
  mountpoint:      /gluster_bricks/data
  devices:         /dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU

  logical volume:  /dev/mapper/gluster_vg_sda4-gluster_lv_engine
  mountpoint:      /gluster_bricks/engine
  devices:         /dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU

  logical volume:  /dev/mapper/onn-home
  mountpoint:      /home
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-ovirt--node--ng--4.4.10.1--0.20220202.0+1
  mountpoint:      /
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-swap
  mountpoint:      [SWAP]
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-tmp
  mountpoint:      /tmp
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var
  mountpoint:      /var
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var_crash
  mountpoint:      /var/crash
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var_log
  mountpoint:      /var/log
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var_log_audit
  mountpoint:      /var/log/audit
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|", "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|", "r|.*|" ]

This filter allows LVM to access the local devices used by the hypervisor, but not shared storage owned by Vdsm. If you add a new device to the volume group, you will need to edit the filter manually.

This is the current LVM filter:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|", "a|^/dev/sda|", "r|.*|" ]

To use the recommended filter we need to add multipath blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:

  blacklist {
      wwid "364cd98f06762ec0029afc17a03e0cf6a"
  }

WARNING: The current LVM filter does not match the recommended filter, Vdsm cannot configure the filter automatically. Please edit /etc/lvm/lvm.conf and set the 'filter' option in the 'devices' section to the recommended value. Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the recommended 'blacklist' section. It is recommended to reboot to verify the new configuration.

I adjusted to the recommended filter, although vdsm-tool config-lvm-filter still returned the same results. So instead I did as you mentioned: I commented out my current filter, ran vdsm-tool config-lvm-filter, and it configured successfully, then I rebooted the node. Now on boot it returns the following, which looks a lot better:

Analyzing host...
LVM filter is already configured for Vdsm

Now my error on re-install is: Host ovirt-2... installation failed. Task Configure host for vdsm failed to execute.
That was just a re-install. The log returns this output; let me know if you'd like more from it, but this is where it seems to error out:

"start_line" : 215, "end_line" : 216, "runner_ident" : "ddb84e00-aa0a-11ec-98dc-00163e6f31f1", "event" : "runner_on_failed", "pid" : 83339, "created" : "2022-03-22T18:09:08.381022", "parent_uuid" : "00163e6f-31f1-a3fb-8e1d-000000000201", "event_data" : { "playbook" : "ovirt-host-deploy.yml", "playbook_uuid" : "2e84fbd4-8368-463e-82e7-3f457ae702d4", "play" : "all", "play_uuid" : "00163e6f-31f1-a3fb-8e1d-00000000000b", "play_pattern" : "all", "task" : "Configure host for vdsm", "task_uuid" : "00163e6f-31f1-a3fb-8e1d-000000000201", "task_action" : "command", "task_args" : "", "task_path" : "/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-vdsm/tasks/configure.yml:27", "role" : "ovirt-host-deploy-vdsm", "host" : "ovirt-2..com", "remote_addr" : "ovirt-2..com", "res" : { "msg" : "non-zero return code", "cmd" : [ "vdsm-tool", "configure", "--force" ], "stdout" : "\nChecking configuration status...\n\nlibvirt is already configured for vdsm\nSUCCESS: ssl configured to true. No conflicts\nManaged volume database is already configured\nlvm is configured for vdsm\nsanlock is configured for vdsm\nCurrent revision of multipath.conf detected, preserving\nabrt is already configured for vdsm\n\nRunning configure...", "stderr" : "libsepol.context_from_record: type insights_client_var_lib_t is not defined\nlibsepol.context_from_record: could not create context structure\nlibsepol.context_from_string: could not create context structure\nlibsepol.sepol_context_to_sid: could not convert system_u:object_r:insights_client_var_lib_t:s0 to sid\ninvalid context system_u:object_r:insights_client_var_lib_t:s0\nlibsemanage.semanage_validate_and_compile_fcontexts: setfiles returned error code 255.\nTraceback (most recent call last):\n File \"/usr/bin/vdsm-tool\", line 209, in main\n return tool_command[cmd][\"command\"](*args)\n File \"/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py\", line 40, in wrapper\n func(*args, **kwargs)\n File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 145, in configure\n _configure(c)\n File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 92, in _configure\n getattr(module, 'configure', lambda: None)()\n File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\", line 88, in configure\n _setup_booleans(True)\n File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\", line 60, in _setup_booleans\n sebool_obj.finish()\n File \"/usr/lib/python3.6/site-packages/seobject.py\", line 340, in finish\n self.commit()\n File \"/usr/lib/python3.6/site-packages/seobject.py\", line 330, in commit\n rc = semanage_commit(self.sh)\nOSError: [Errno 0] Error", "rc" : 1, "start" : "2022-03-22 12:09:02.211068", "end" : "2022-03-22 12:09:09.302289", "delta" : "0:00:07.091221", "changed" : true, "invocation" : { "module_args" : { "_raw_params" : "vdsm-tool configure --force", "warn" : true, "_uses_shell" : false, "stdin_add_newline" : true, "strip_empty_ends" : true, "argv" : null, "chdir" : null, "executable" : null, "creates" : null, "removes" : null, "stdin" : null } }, "stdout_lines" : [ "", "Checking configuration status...", "", "libvirt is already configured for vdsm", "SUCCESS: ssl configured to true. 
No conflicts", "Managed volume database is already configured", "lvm is configured for vdsm", "sanlock is configured for vdsm", "Current revision of multipath.conf detected, preserving", "abrt is already configured for vdsm", "", "Running configure..." ], "stderr_lines" : [ "libsepol.context_from_record: type insights_client_var_lib_t is not defined", "libsepol.context_from_record: could not create context structure", "libsepol.context_from_string: could not create context structure", "libsepol.sepol_context_to_sid: could not convert system_u:object_r:insights_client_var_lib_t:s0 to sid", "invalid context system_u:object_r:insights_client_var_lib_t:s0", "libsemanage.semanage_validate_and_compile_fcontexts: setfiles returned error code 255.", "Traceback (most recent call last):", " File \"/usr/bin/vdsm-tool\", line 209, in main", " return tool_command[cmd][\"command\"](*args)", " File \"/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py\", line 40, in wrapper", " func(*args, **kwargs)", " File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 145, in configure", " _configure(c)", " File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 92, in _configure", " getattr(modul e, 'configure', lambda: None)()", " File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\", line 88, in configure", " _setup_booleans(True)", " File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\", line 60, in _setup_booleans", " sebool_obj.finish()", " File \"/usr/lib/python3.6/site-packages/seobject.py\", line 340, in finish", " self.commit()", " File \"/usr/lib/python3.6/site-packages/seobject.py\", line 330, in commit", " rc = semanage_commit(self.sh)", "OSError: [Errno 0] Error" ], "_ansible_no_log" : false }, "start" : "2022-03-22T18:09:00.343989", "end" : "2022-03-22T18:09:08.380734", "duration" : 8.036745, "ignore_errors" : null, "event_loop" : null, "uuid" : "bc92ed31-4322-433c-a44d-186369dc8158" } } }

On Tue, Mar 22, 2022 at 8:14 PM Abe E <aellahib@gmail.com> wrote:
Apologies, here it is [root@ovirt-2 ~]# vdsm-tool config-lvm-filter Analyzing host... Found these mounted logical volumes on this host:
logical volume: /dev/mapper/gluster_vg_sda4-gluster_lv_data mountpoint: /gluster_bricks/data devices: /dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU
logical volume: /dev/mapper/gluster_vg_sda4-gluster_lv_engine mountpoint: /gluster_bricks/engine devices: /dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU
logical volume: /dev/mapper/onn-home mountpoint: /home devices: /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
logical volume: /dev/mapper/onn-ovirt--node--ng--4.4.10.1--0.20220202.0+1 mountpoint: / devices: /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
logical volume: /dev/mapper/onn-swap mountpoint: [SWAP] devices: /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
logical volume: /dev/mapper/onn-tmp mountpoint: /tmp devices: /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
logical volume: /dev/mapper/onn-var mountpoint: /var devices: /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
logical volume: /dev/mapper/onn-var_crash mountpoint: /var/crash devices: /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
logical volume: /dev/mapper/onn-var_log mountpoint: /var/log devices: /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
logical volume: /dev/mapper/onn-var_log_audit mountpoint: /var/log/audit devices: /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
This is the recommended LVM filter for this host:
filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|", "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|", "r|.*|" ]
This filter allows LVM to access the local devices used by the hypervisor, but not shared storage owned by Vdsm. If you add a new device to the volume group, you will need to edit the filter manually.
This is the current LVM filter:
filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|", "a|^/dev/sda|", "r|.*|" ]
To use the recommended filter we need to add multipath blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
blacklist { wwid "364cd98f06762ec0029afc17a03e0cf6a" }
WARNING: The current LVM filter does not match the recommended filter, Vdsm cannot configure the filter automatically.
Please edit /etc/lvm/lvm.conf and set the 'filter' option in the 'devices' section to the recommended value.
Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the recommended 'blacklist' section.
It is recommended to reboot to verify the new configuration.
I adjusted to the recommended filter, although vdsm-tool config-lvm-filter still returned the same results. So instead I did as you mentioned: I commented out my current filter, ran vdsm-tool config-lvm-filter, and it configured successfully, then I rebooted the node.
Now on boot it returns the following, which looks a lot better:
Analyzing host...
LVM filter is already configured for Vdsm
Good, we solved the storage issue.
Now my error on re-install is: Host ovirt-2... installation failed. Task Configure host for vdsm failed to execute. That was just a re-install. The log returns this output; let me know if you'd like more from it, but this is where it seems to error out:
"start_line" : 215, "end_line" : 216, "runner_ident" : "ddb84e00-aa0a-11ec-98dc-00163e6f31f1", "event" : "runner_on_failed", "pid" : 83339, "created" : "2022-03-22T18:09:08.381022", "parent_uuid" : "00163e6f-31f1-a3fb-8e1d-000000000201", "event_data" : { "playbook" : "ovirt-host-deploy.yml", "playbook_uuid" : "2e84fbd4-8368-463e-82e7-3f457ae702d4", "play" : "all", "play_uuid" : "00163e6f-31f1-a3fb-8e1d-00000000000b", "play_pattern" : "all", "task" : "Configure host for vdsm", "task_uuid" : "00163e6f-31f1-a3fb-8e1d-000000000201", "task_action" : "command", "task_args" : "", "task_path" : "/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-vdsm/tasks/configure.yml:27", "role" : "ovirt-host-deploy-vdsm", "host" : "ovirt-2..com", "remote_addr" : "ovirt-2..com", "res" : { "msg" : "non-zero return code", "cmd" : [ "vdsm-tool", "configure", "--force" ], "stdout" : "\nChecking configuration status...\n\nlibvirt is already configured for vdsm\nSUCCESS: ssl configured to true. No conflicts\nManaged volume database is already configured\nlvm is configured for vdsm\nsanlock is configured for vdsm\nCurrent revision of multipath.conf detected, preserving\nabrt is already configured for vdsm\n\nRunning configure...", "stderr" : "libsepol.context_from_record: type insights_client_var_lib_t is not defined\nlibsepol.context_from_record: could not create context structure\nlibsepol.context_from_string: could not create context structure\nlibsepol.sepol_context_to_sid: could not convert system_u:object_r:insights_client_var_lib_t:s0 to sid\ninvalid context system_u:object_r:insights_client_var_lib_t:s0\nlibsemanage.semanage_validate_and_compile_fcontexts: setfiles returned error code 255.\nTraceback (most recent call last):\n File \"/usr/bin/vdsm-tool\", line 209, in main\n return tool_command[cmd][\"command\"](*args)\n File \"/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py\", line 40, in wrapper\n func(*args, **kwargs)\n File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 145, in configure\n _configure(c)\n File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 92, in _configure\n getattr(module, 'configure', lambda: None)()\n F ile \"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\", line 88, in configure\n _setup_booleans(True)\n File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\", line 60, in _setup_booleans\n sebool_obj.finish()\n File \"/usr/lib/python3.6/site-packages/seobject.py\", line 340, in finish\n self.commit()\n File \"/usr/lib/python3.6/site-packages/seobject.py\", line 330, in commit\n rc = semanage_commit(self.sh)\nOSError: [Errno 0] Error", "rc" : 1, "start" : "2022-03-22 12:09:02.211068", "end" : "2022-03-22 12:09:09.302289", "delta" : "0:00:07.091221", "changed" : true, "invocation" : { "module_args" : { "_raw_params" : "vdsm-tool configure --force", "warn" : true, "_uses_shell" : false, "stdin_add_newline" : true, "strip_empty_ends" : true, "argv" : null, "chdir" : null, "executable" : null, "creates" : null, "removes" : null, "stdin" : null } }, "stdout_lines" : [ "", "Checking configuration status...", "", "libvirt is already configured for vdsm", "SUCCESS: ssl configured to true. No conflicts", "Managed volume database is already configured", "lvm is configured for vdsm", "sanlock is configured for vdsm", "Current revision of multipath.conf detected, preserving", "abrt is already configured for vdsm", "", "Running configure..." 
], "stderr_lines" : [ "libsepol.context_from_record: type insights_client_var_lib_t is not defined", "libsepol.context_from_record: could not create context structure", "libsepol.context_from_string: could not create context structure", "libsepol.sepol_context_to_sid: could not convert system_u:object_r:insights_client_var_lib_t:s0 to sid", "invalid context system_u:object_r:insights_client_var_lib_t:s0", "libsemanage.semanage_validate_and_compile_fcontexts: setfiles returned error code 255.", "Traceback (most recent call last):", " File \"/usr/bin/vdsm-tool\", line 209, in main", " return tool_command[cmd][\"command\"](*args)", " File \"/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py\", line 40, in wrapper", " func(*args, **kwargs)", " File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 145, in configure", " _configure(c)", " File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 92, in _configure", " getattr(modul e, 'configure', lambda: None)()", " File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\", line 88, in configure", " _setup_booleans(True)", " File \"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\", line 60, in _setup_booleans", " sebool_obj.finish()", " File \"/usr/lib/python3.6/site-packages/seobject.py\", line 340, in finish", " self.commit()", " File \"/usr/lib/python3.6/site-packages/seobject.py\", line 330, in commit", " rc = semanage_commit(self.sh)", "OSError: [Errno 0] Error" ], "_ansible_no_log" : false }, "start" : "2022-03-22T18:09:00.343989", "end" : "2022-03-22T18:09:08.380734", "duration" : 8.036745, "ignore_errors" : null, "event_loop" : null, "uuid" : "bc92ed31-4322-433c-a44d-186369dc8158" } } }
This is an issue with the sebool configurator; I hope Marcin can help with this.

Did you try the obvious things, like installing the latest packages on the host and installing the latest oVirt version? Details on your host and oVirt version would also help.

Nir
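As a side note on "installing latest packages": on oVirt Node the host is updated as a whole image rather than as individual RPMs, so checking and updating it would look roughly like the sketch below, assuming the standard oVirt Node / imgbased tooling is in place:

# show the installed and available node image layers
nodectl info

# pull the latest node image update and reboot into it
dnf update ovirt-node-ng-image-update
reboot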

Thank you. I've tried to re-install the current oVirt package only (ovirt-engine-appliance-4.4-20220308105414.1.el8.x86_64.rpm). My oVirt host is running 4.4. I am not sure if we can fully rely on the info oVirt reports, given that it sees this server as "not responding", but these are the specs:

OS Version: RHEL - 8.6.2109.0 - 1.el8
OS Description: oVirt Node 4.4.10
Kernel Version: 4.18.0 - 358.el8.x86_64
KVM Version: 6.0.0 - 33.el8s
LIBVIRT Version: libvirt-7.10.0-1.module_el8.6.0+1046+bd8eec5e
VDSM Version: vdsm-4.40.100.2-1.el8
SPICE Version: 0.14.3 - 4.el8
GlusterFS Version: glusterfs-8.6-2.el8s
CEPH Version: librbd1-16.2.7-1.el8s
Open vSwitch Version: openvswitch-2.11-1.el8
Nmstate Version: nmstate-1.2.1-0.2.alpha2.el8
Kernel Features: MDS: (Mitigation: Clear CPU buffers; SMT vulnerable), L1TF: (Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable), SRBDS: (Not affected), MELTDOWN: (Mitigation: PTI), SPECTRE_V1: (Mitigation: usercopy/swapgs barriers and __user pointer sanitization), SPECTRE_V2: (Mitigation: Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling), ITLB_MULTIHIT: (KVM: Mitigation: VMX disabled), TSX_ASYNC_ABORT: (Mitigation: Clear CPU buffers; SMT vulnerable), SPEC_STORE_BYPASS: (Mitigation: Speculative Store Bypass disabled via prctl and seccomp)
VNC Encryption: Disabled
FIPS mode enabled: Disabled

After running yum reinstall ovirt-node-ng-image-update, the oVirt node image was re-installed and I was able to start VDSM again, as well as ovirt-ha-broker and ovirt-ha-agent. I was still unable to activate the 2nd node in the engine, so I tried to re-install it with engine deploy, and this time it got past the previous VDSM issue. Thank you for your help with the LVM issues I was having, noted for future reference!
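For anyone who lands in the same state, the sequence that recovered this node amounts to roughly the following. The service names are the standard oVirt ones, but whether a plain image reinstall is enough depends on what the interrupted upgrade actually broke:

# put the half-applied node image back in place
yum reinstall ovirt-node-ng-image-update
reboot

# confirm the storage and HA services come back up
systemctl status vdsmd ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status

# then re-install the host from the engine (with hosted engine deploy)
# so it is redeployed and activated in the cluster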

On Wed, Mar 23, 2022 at 6:04 PM Abe E <aellahib@gmail.com> wrote:
After running yum reinstall ovirt-node-ng-image-update, the oVirt node image was re-installed and I was able to start VDSM again, as well as ovirt-ha-broker and ovirt-ha-agent.
I was still unable to activate the 2nd Node in the engine so I tried to re-install with engine deploy and it was able to complete past the previous VDSM issue it had.
Thank You for your help in regards to the LVM issues I was having, noted for future reference!
Great that you managed to recover, but if reinstalling fixed the issue, it means that there is some issue with the node upgrade. Sandro, do you think we need a bug for this?