
Hello all

I tried to set up Gluster volumes in Cockpit using the wizard. Based on Red Hat's recommendations [1], I wanted to put the volume for the oVirt Engine on a thick-provisioned logical volume, and therefore removed the thinpoolname line and the corresponding configuration from the YAML file (see below). Unfortunately, this approach was not successful. My workaround is now to create only the data volume through the wizard and to create a thick-provisioned Gluster volume for the engine by hand. What would you recommend doing?

Thanks for any input :)

Regards, Jonas

[1]: https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrast...

hc_nodes:
  hosts:
    server-005.storage.int.rabe.ch:
      gluster_infra_volume_groups:
        - vgname: vg_tier1_01
          pvname: /dev/md/raid_tier1_gluster
      gluster_infra_mount_devices:
        - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
          lvname: lv_tier1_ovirt_engine_01
          vgname: vg_tier1_01
        - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
          lvname: lv_tier1_ovirt_data_01
          vgname: vg_tier1_01
      blacklist_mpath_devices:
        - raid_tier1_gluster
      gluster_infra_thinpools:
        - vgname: vg_tier1_01
          thinpoolname: lv_tier1_ovirt_data_01_tp
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: vg_tier1_01
          lvname: lv_tier1_ovirt_engine_01
          lvsize: 100G
        - vgname: vg_tier1_01
          thinpool: lv_tier1_ovirt_data_01_tp
          lvname: lv_tier1_ovirt_data_01
          lvsize: 16000G
    server-006.storage.int.rabe.ch:
      gluster_infra_volume_groups:
        - vgname: vg_tier1_01
          pvname: /dev/md/raid_tier1_gluster
      gluster_infra_mount_devices:
        - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
          lvname: lv_tier1_ovirt_engine_01
          vgname: vg_tier1_01
        - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
          lvname: lv_tier1_ovirt_data_01
          vgname: vg_tier1_01
      blacklist_mpath_devices:
        - raid_tier1_gluster
      gluster_infra_thinpools:
        - vgname: vg_tier1_01
          thinpoolname: lv_tier1_ovirt_data_01_tp
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: vg_tier1_01
          lvname: lv_tier1_ovirt_engine_01
          lvsize: 100G
        - vgname: vg_tier1_01
          thinpool: lv_tier1_ovirt_data_01_tp
          lvname: lv_tier1_ovirt_data_01
          lvsize: 16000G
    server-007.storage.int.rabe.ch:
      gluster_infra_volume_groups:
        - vgname: vg_tier0_01
          pvname: /dev/md/raid_tier0_gluster
      gluster_infra_mount_devices:
        - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
          lvname: lv_tier1_ovirt_engine_01
          vgname: vg_tier0_01
        - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
          lvname: lv_tier1_ovirt_data_01
          vgname: vg_tier0_01
      blacklist_mpath_devices:
        - raid_tier0_gluster
      gluster_infra_thinpools:
        - vgname: vg_tier0_01
          thinpoolname: lv_tier1_ovirt_data_01_tp
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: vg_tier0_01
          lvname: lv_tier1_ovirt_engine_01
          lvsize: 20G
        - vgname: vg_tier0_01
          thinpool: lv_tier1_ovirt_data_01_tp
          lvname: lv_tier1_ovirt_data_01
          lvsize: 32G
  vars:
    gluster_infra_disktype: JBOD
    gluster_infra_dalign: 1024K  # the post read "gluster_infra_daling"; assuming the role's dalign option was meant
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - server-005.storage.int.rabe.ch
      - server-006.storage.int.rabe.ch
      - server-007.storage.int.rabe.ch
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: tier1-ovirt-engine-01
        brick: /gluster_bricks/tier1-ovirt-engine-01/gb-01
        arbiter: 1
      - volname: tier1-ovirt-data-01
        brick: /gluster_bricks/tier1-ovirt-data-01/gb-01
        arbiter: 1
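A note on the failing attempt: even with the thinpool lines removed, the engine LV above still sits under gluster_infra_lv_logicalvols, reduced to an entry like this:

  gluster_infra_lv_logicalvols:
    - vgname: vg_tier1_01
      lvname: lv_tier1_ovirt_engine_01
      lvsize: 100G

If backend_setup treats every gluster_infra_lv_logicalvols entry as a thin LV, as suggested later in this thread, an entry without a thinpool key has no pool to be created in, which would explain the failure. This reading of the role is an assumption, not confirmed against its source.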

Nevermind, I found this here: https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_s...

Can you share the error that you get when you run it?

I don't have that error output anymore, but I assume that gluster_infra_lv_logicalvols requires a thin pool: https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_s...
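If a thick-provisioned engine LV is the goal, the backend_setup role appears to expose a separate gluster_infra_thick_lvs variable for exactly this case. A minimal sketch of the per-host section, assuming the variable and field names from the role's README (untested here):

  hc_nodes:
    hosts:
      server-005.storage.int.rabe.ch:
        gluster_infra_thick_lvs:
          - vgname: vg_tier1_01
            lvname: lv_tier1_ovirt_engine_01
            size: 100G

The engine entry would then be dropped from gluster_infra_lv_logicalvols entirely, leaving only the thin data LV there.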

Can you share the RH recommendation to use a thick LV?

Best regards,
Strahil Nikolov
participants (4):
- Jonas
- jonas@rabe.ch
- Ritesh Chikatwar
- Strahil Nikolov