On Tue, Mar 22, 2022 at 8:14 PM Abe E <aellahib(a)gmail.com> wrote:
Apologies, here it is
[root@ovirt-2 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:
  logical volume:  /dev/mapper/gluster_vg_sda4-gluster_lv_data
  mountpoint:      /gluster_bricks/data
  devices:         /dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU

  logical volume:  /dev/mapper/gluster_vg_sda4-gluster_lv_engine
  mountpoint:      /gluster_bricks/engine
  devices:         /dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU

  logical volume:  /dev/mapper/onn-home
  mountpoint:      /home
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-ovirt--node--ng--4.4.10.1--0.20220202.0+1
  mountpoint:      /
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-swap
  mountpoint:      [SWAP]
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-tmp
  mountpoint:      /tmp
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var
  mountpoint:      /var
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var_crash
  mountpoint:      /var/crash
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var_log
  mountpoint:      /var/log
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var_log_audit
  mountpoint:      /var/log/audit
  devices:         /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
This is the recommended LVM filter for this host:
  filter = [
    "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|",
    "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|",
    "r|.*|"
  ]
This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.
This is the current LVM filter:
  filter = [
    "a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|",
    "a|^/dev/sda|",
    "r|.*|"
  ]
To use the recommended filter we need to add multipath
blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
  blacklist {
      wwid "364cd98f06762ec0029afc17a03e0cf6a"
  }
WARNING: The current LVM filter does not match the recommended filter,
Vdsm cannot configure the filter automatically.
Please edit /etc/lvm/lvm.conf and set the 'filter' option in the
'devices' section to the recommended value.
Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the
recommended 'blacklist' section.
It is recommended to reboot to verify the new configuration.
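For reference, applying that recommendation by hand amounts to edits like these, using the values the tool printed above (a sketch; only the relevant parts of each file are shown):

    # /etc/lvm/lvm.conf -- the 'filter' option in the existing 'devices' section
    devices {
        filter = [
            "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|",
            "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|",
            "r|.*|"
        ]
    }

    # /etc/multipath/conf.d/vdsm_blacklist.conf -- the blacklist the tool asked for
    blacklist {
        wwid "364cd98f06762ec0029afc17a03e0cf6a"
    }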
After configuring LVM with the recommended filter, vdsm-tool config-lvm-filter still returned the same results. Instead, I did as you mentioned: I commented out my current filter, ran vdsm-tool config-lvm-filter again, and it configured successfully; then I rebooted the node.
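In summary, the sequence that worked was roughly this (a reconstruction, not a verbatim session; vdsm-tool config-lvm-filter prompts for confirmation unless run with -y):

    # /etc/lvm/lvm.conf -- comment out the old filter in the devices section
    # filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|", "a|^/dev/sda|", "r|.*|" ]

    vdsm-tool config-lvm-filter -y   # let the tool write the recommended filter
    reboot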
Now on boot it is returning the following, which looks a lot better:
Analyzing host...
LVM filter is already configured for Vdsm
Good, we solved the storage issue.
Now my error on re-install is: Host ovirt-2... installation failed. Task
Configure host for vdsm failed to execute. That was just a re-install of
this host, and the log returns the output below; let me know if you'd like
more from it, but this is where it seems to error out:
"start_line" : 215,
"end_line" : 216,
"runner_ident" : "ddb84e00-aa0a-11ec-98dc-00163e6f31f1",
"event" : "runner_on_failed",
"pid" : 83339,
"created" : "2022-03-22T18:09:08.381022",
"parent_uuid" : "00163e6f-31f1-a3fb-8e1d-000000000201",
"event_data" : {
"playbook" : "ovirt-host-deploy.yml",
"playbook_uuid" : "2e84fbd4-8368-463e-82e7-3f457ae702d4",
"play" : "all",
"play_uuid" : "00163e6f-31f1-a3fb-8e1d-00000000000b",
"play_pattern" : "all",
"task" : "Configure host for vdsm",
"task_uuid" : "00163e6f-31f1-a3fb-8e1d-000000000201",
"task_action" : "command",
"task_args" : "",
"task_path" :
"/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-vdsm/tasks/configure.yml:27",
"role" : "ovirt-host-deploy-vdsm",
"host" : "ovirt-2..com",
"remote_addr" : "ovirt-2..com",
"res" : {
"msg" : "non-zero return code",
"cmd" : [ "vdsm-tool", "configure",
"--force" ],
"stdout" : "\nChecking configuration status...\n\nlibvirt is
already configured for vdsm\nSUCCESS: ssl configured to true. No
conflicts\nManaged volume database is already configured\nlvm is configured
for vdsm\nsanlock is configured for vdsm\nCurrent revision of
multipath.conf detected, preserving\nabrt is already configured for
vdsm\n\nRunning configure...",
"stderr" : "libsepol.context_from_record: type
insights_client_var_lib_t is not defined\nlibsepol.context_from_record:
could not create context structure\nlibsepol.context_from_string: could not
create context structure\nlibsepol.sepol_context_to_sid: could not convert
system_u:object_r:insights_client_var_lib_t:s0 to sid\ninvalid context
system_u:object_r:insights_client_var_lib_t:s0\nlibsemanage.semanage_validate_and_compile_fcontexts:
setfiles returned error code 255.\nTraceback (most recent call last):\n
File \"/usr/bin/vdsm-tool\", line 209, in main\n return
tool_command[cmd][\"command\"](*args)\n File
\"/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py\", line 40, in
wrapper\n func(*args, **kwargs)\n File
\"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 145,
in configure\n _configure(c)\n File
\"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 92, in
_configure\n getattr(module, 'configure', lambda: None)()\n F
ile
\"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\",
line 88, in configure\n _setup_booleans(True)\n File
\"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\",
line 60, in _setup_booleans\n sebool_obj.finish()\n File
\"/usr/lib/python3.6/site-packages/seobject.py\", line 340, in finish\n
self.commit()\n File \"/usr/lib/python3.6/site-packages/seobject.py\",
line 330, in commit\n rc = semanage_commit(self.sh)\nOSError: [Errno 0]
Error",
"rc" : 1,
"start" : "2022-03-22 12:09:02.211068",
"end" : "2022-03-22 12:09:09.302289",
"delta" : "0:00:07.091221",
"changed" : true,
"invocation" : {
"module_args" : {
"_raw_params" : "vdsm-tool configure --force",
"warn" : true,
"_uses_shell" : false,
"stdin_add_newline" : true,
"strip_empty_ends" : true,
"argv" : null,
"chdir" : null,
"executable" : null,
"creates" : null,
"removes" : null,
"stdin" : null
}
},
"stdout_lines" : [ "", "Checking configuration
status...", "",
"libvirt is already configured for vdsm", "SUCCESS: ssl configured to
true.
No conflicts", "Managed volume database is already configured", "lvm
is
configured for vdsm", "sanlock is configured for vdsm", "Current
revision
of multipath.conf detected, preserving", "abrt is already configured for
vdsm", "", "Running configure..." ],
"stderr_lines" : [ "libsepol.context_from_record: type
insights_client_var_lib_t is not defined", "libsepol.context_from_record:
could not create context structure", "libsepol.context_from_string: could
not create context structure", "libsepol.sepol_context_to_sid: could not
convert system_u:object_r:insights_client_var_lib_t:s0 to sid", "invalid
context system_u:object_r:insights_client_var_lib_t:s0",
"libsemanage.semanage_validate_and_compile_fcontexts: setfiles returned
error code 255.", "Traceback (most recent call last):", " File
\"/usr/bin/vdsm-tool\", line 209, in main", " return
tool_command[cmd][\"command\"](*args)", " File
\"/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py\", line 40, in
wrapper", " func(*args, **kwargs)", " File
\"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 145,
in configure", " _configure(c)", " File
\"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 92, in
_configure", " getattr(modul
e, 'configure', lambda: None)()", " File
\"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\",
line 88, in configure", " _setup_booleans(True)", " File
\"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\",
line 60, in _setup_booleans", " sebool_obj.finish()", " File
\"/usr/lib/python3.6/site-packages/seobject.py\", line 340, in finish",
"
self.commit()", " File
\"/usr/lib/python3.6/site-packages/seobject.py\",
line 330, in commit", " rc = semanage_commit(self.sh)", "OSError:
[Errno
0] Error" ],
"_ansible_no_log" : false
},
"start" : "2022-03-22T18:09:00.343989",
"end" : "2022-03-22T18:09:08.380734",
"duration" : 8.036745,
"ignore_errors" : null,
"event_loop" : null,
"uuid" : "bc92ed31-4322-433c-a44d-186369dc8158"
}
}
}
This is an issue with the sebool configurator; I hope Marcin can help with
this.
Did you try the obvious things, like installing the latest packages on the
host and upgrading to the latest oVirt version?
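For example, something like this on the host (assuming an oVirt Node install, which the onn-ovirt--node--ng volume above suggests; adjust for a plain EL host):

    dnf update
    # then retry the step that failed during deploy:
    vdsm-tool configure --force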
Details on your host and oVirt version would also help.
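For example:

    rpm -qa | grep -Ei 'ovirt|vdsm'
    cat /etc/os-release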
Nir