It seems that I replied 2 min before your e-mail :D
Glad you made it.
P.S.: It would be better if you could find the time to investigate the root cause.
Best Regards,
Strahil Nikolov

On Wednesday, 4 May 2022 at 09:54:13 GMT+3, Abe E <aellahib(a)gmail.com> wrote:
I am happy no one answered; it was a fun experimentation process, and I was able to reconfigure the gluster node using an Ansible playbook from Red Hat's website.
Instructions for what's needed:
- https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infr...
- I also had to edit the gluster Python file (as discussed in other threads) due to the vdsm bug in 4.5.
In case anyone is wondering, this is the playbook I had to edit with my configs:
# gluster_infra_disktype
# Set a disk type. Options: JBOD, RAID6, RAID10 - Default: JBOD
gluster_infra_disktype: RAID6
# gluster_infra_dalign
# Dataalignment, for JBOD default is 256K if not provided.
# For RAID{6,10} dataalignment is computed by multiplying
# gluster_infra_diskcount and gluster_infra_stripe_unit_size.
gluster_infra_dalign: 256K
# gluster_infra_diskcount
# Required only for RAID6 and RAID10.
gluster_infra_diskcount: 8
# gluster_infra_stripe_unit_size
# Required only in case of RAID6 and RAID10. Stripe unit size always in KiB, do
# not provide the trailing `K' in the value.
gluster_infra_stripe_unit_size: 128
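# Worked example of the computation described above (my note, not from the
# Red Hat template): for this RAID6 layout, diskcount * stripe_unit_size
# = 8 * 128 KiB = 1024 KiB, which would be the computed data alignment.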
# gluster_infra_volume_groups
# Variables for creating volume group
gluster_infra_volume_groups:
- { vgname: 'gluster_vg_sda4', pvname: '/dev/sda4' }
# gluster_infra_thick_lvs
# Variable for thick lv creation
gluster_infra_thick_lvs:
- { vgname: 'gluster_vg_sda4', lvname: 'gluster_lv_engine', size: '100G' }
# gluster_infra_thinpools
# thinpoolname is optional, if not provided `vgname' followed by _thinpool is
# used for name. poolmetadatasize is optional, default 16G is used
gluster_infra_thinpools:
- { vgname: 'gluster_vg_sda4', thinpoolname: 'gluster_thinpool_gluster_vg_sda4', thinpoolsize: '100G', poolmetadatasize: '16G' }
# gluster_infra_lv_logicalvols
# Thinvolumes for the brick. `thinpoolname' is optional, if omitted `vgname'
# followed by _thinpool is used
gluster_infra_lv_logicalvols:
- { vgname: 'gluster_vg_sda4', thinpool: 'gluster_thinpool_gluster_vg_sda4', lvname: 'gluster_lv_data', lvsize: '5500G' }
# Setting up cache using SSD disks
#gluster_infra_cache_vars:
# - { vgname: 'gluster_vg_sda4', cachedisk: '/dev/vdd',
# cachethinpoolname: 'foo_thinpool', cachelvname: 'cachelv',
# cachelvsize: '20G', cachemetalvname: 'cachemeta',
# cachemetalvsize: '100M', cachemode: 'writethrough' }
# gluster_infra_mount_devices
gluster_infra_mount_devices:
- { path: '/gluster_bricks/engine/engine', vgname: 'gluster_vg_sda4', lvname: 'gluster_lv_engine' }
- { path: '/gluster_bricks/data/data', vgname: 'gluster_vg_sda4', lvname: 'gluster_lv_data' }
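
For reference, here is a minimal sketch of how a variables file like this might be consumed; the wrapper playbook name, inventory group and file names are my own examples (not from the Red Hat instructions), and it assumes the gluster-ansible roles are installed on the host running Ansible:

# sketch_deploy.yml - hypothetical wrapper playbook
- hosts: hc_nodes              # example inventory group for the gluster node(s)
  remote_user: root
  vars_files:
    - gluster_infra_vars.yml   # the variables shown above
  roles:
    - gluster.infra            # creates the VG, thin pool, LVs and mount points

# Example invocation:
#   ansible-playbook -i inventory sketch_deploy.yml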