Also, you can edit the /etc/fstab entries for the bricks and add the mount option:
context="system_u:object_r:glusterd_brick_t:s0"
Then remount the bricks (umount <path>; mount <path>). This tells the kernel to skip per-file SELinux label lookups and treat everything on that filesystem as already carrying the gluster brick context, which reduces the I/O.
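For reference, a brick entry in /etc/fstab with that option could look roughly like this (the device path and the other mount options are only placeholders; keep whatever your deployment already uses):

/dev/mapper/gluster_vg_sdd-gluster_lv_data  /gluster_bricks/data  xfs  inode64,noatime,context="system_u:object_r:glusterd_brick_t:s0"  0 0

After editing, umount /gluster_bricks/data followed by mount /gluster_bricks/data picks up the new option.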
Best Regards,
Strahil Nikolov
On Sat, Oct 2, 2021 at 11:02, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
Most probably it's set in a variable.
Just run the following:
semanage fcontext -a -t glusterd_brick_t "/gluster_bricks(/.*)?"
restorecon -RFvv /gluster_bricks/
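If you want to confirm the labels were applied, something like the following should show the glusterd_brick_t type on the brick directories (the paths below assume the /gluster_bricks layout from this thread):

ls -dZ /gluster_bricks /gluster_bricks/*
matchpathcon /gluster_bricks/engine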
Best Regards,
Strahil Nikolov
On Sat, Oct 2, 2021 at 3:08, Woo Hsutung <woohsutung(a)gmail.com> wrote:
Strahil,
Thanks for your response!
Below is the Ansible script I can edit at the last step:
-----------------------------------------------------------------------------
hc_nodes:
  hosts:
    node00:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdd
          pvname: /dev/sdd
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdd
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sdd
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sdd
      blacklist_mpath_devices:
        - sdd
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdd
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdd
          thinpoolname: gluster_thinpool_gluster_vg_sdd
          poolmetadatasize: 2G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_gluster_vg_sdd
          lvname: gluster_lv_data
          lvsize: 400G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_gluster_vg_sdd
          lvname: gluster_lv_vmstore
          lvsize: 400G
  vars:
    gluster_infra_disktype: JBOD
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - node00
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: data
        brick: /gluster_bricks/data/data
        arbiter: 0
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0
    gluster_features_hci_volume_options:
      storage.owner-uid: '36'
      storage.owner-gid: '36'
      features.shard: 'on'
      performance.low-prio-threads: '32'
      performance.strict-o-direct: 'on'
      network.remote-dio: 'off'
      network.ping-timeout: '30'
      user.cifs: 'off'
      nfs.disable: 'on'
      performance.quick-read: 'off'
      performance.read-ahead: 'off'
      performance.io-cache: 'off'
      cluster.eager-lock: enable
-----------------------------------------------------------------------------
There is no word “glusterd_brick_t” in it :(
In the end, I changed the value of “gluster_set_selinux_labels” from “true” to “false”, and the deployment completed successfully.
But I don’t know whether this change will impact the system… Could you give some suggestions?
BR,
Hsutung
On Oct 1, 2021, at 10:58 PM, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
In the cockpit installer, the last step allows you to edit the Ansible before running it. Just search for glusterd_brick_t and replace it.
Best Regards,
Strahil Nikolov
On Fri, Oct 1, 2021 at 17:48, Woo Hsutung <woohsutung(a)gmail.com> wrote:
The same issue happens when I deploy on a single node.
And I can’t find where I can edit the text to replace glusterd_brick_t with system_u:object_r:glusterd_brick_t:s0.
Any suggestion?
Best Regards
Hsutung