Deployment on a three-node cluster using the oVirt HCI wizard.

I think this is a bug: the wizard needs to either do a pre-flight name-length validation or allow longer generated VG/LV names.


I avoid using /dev/sd# device names because they can change between boots, and the wizard allows changing to a more explicit device path, e.g. /dev/mapper/Samsung_SSD_850_PRO_512GB_S250NXAGA15787L


Error:
TASK [gluster.infra/roles/backend_setup : Create a LV thinpool for similar device types] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml:239
failed: [thorst.penguinpages.local] (item={'vgname': 'gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'thinpoolname': 'gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'poolmetadatasize': '3G'}) => {"ansible_loop_var": "item", "changed": false, "err": "  Full LV name \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\" is too long.\n  Full LV name \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\" is too long.\n  Full LV name \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\" is too long.\n  Full LV name \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\" is too long.\n  Full LV name \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\" is too long.\n  Full LV name \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tmeta\" is too long.\n  Internal error: LV name \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tmeta\" length 130 is not supported.\n  Internal error: LV name \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\" length 130 is not supported.\n  Internal error: LV name \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tmeta\" length 130 is not supported.\n  Internal error: LV name \"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\" length 130 is not supported.\n", "item": {"poolmetadatasize": "3G", "thinpoolname": "gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L", "vgname": "gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L"}, "msg": "Creating logical volume 'None' failed", "rc": 5}
failed: [medusast.penguinpages.local] (item={'vgname': 'gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306', 'thinpoolname': 'gluster_thinpool_gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306', 'poolmetadatasize': '3G'}) => {"ansible_loop_var": "item", "changed": false, "err": "  Internal error: LV name \"gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306/gluster_thinpool_gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306\" length 130 is not supported.\n", "item": {"poolmetadatasize": "3G", "thinpoolname": "gluster_thinpool_gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306", "vgname": "gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306"}, "msg": "Creating logical volume 'None' failed", "rc": 5}
changed: [odinst.penguinpages.local] => (item={'vgname': 'gluster_vg_Micron_1100_MTFDDAV512TBN_17401F699137', 'thinpoolname': 'gluster_thinpool_gluster_vg_Micron_1100_MTFDDAV512TBN_17401F699137', 'poolmetadatasize': '3G'}) => {"ansible_loop_var": "item", "changed": true, "item": {"poolmetadatasize": "3G", "thinpoolname": "gluster_thinpool_gluster_vg_Micron_1100_MTFDDAV512TBN_17401F699137", "vgname": "gluster_vg_Micron_1100_MTFDDAV512TBN_17401F699137"}, "msg": ""}
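
As suggested above, a pre-flight check in the wizard (or the backend_setup role) would catch this before lvcreate ever runs. Below is only a sketch of what such a validation could look like, not something the role ships today: it assumes the wizard feeds the role a gluster_infra_thinpools list shaped like the loop items in the error, and the 127-character budget is an assumed safe limit rather than the exact figure LVM enforces.

# Hypothetical pre-flight task (sketch only): fail early if the generated
# VG + thinpool names, plus LVM's internal _tdata/_tmeta suffix, would be too long.
- name: Validate generated LVM name lengths before creating thin pools
  ansible.builtin.assert:
    that:
      # "<vgname>/<thinpoolname>" plus "_tdata" adds 7 extra characters;
      # 127 is an assumed budget, adjust to whatever limit LVM actually enforces
      - (item.vgname | length) + (item.thinpoolname | length) + 7 <= 127
    fail_msg: "Generated LVM name for {{ item.vgname }} would be too long; shorten the VG/thinpool names."
  loop: "{{ gluster_infra_thinpools | default([]) }}"
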


I will revert to /dev/sd# for now, but this should be cleaned up.
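
A possible alternative, which I have not tested, would be to keep the stable /dev/mapper path only as the physical volume and hand the role short, explicit VG/thinpool names in the generated deployment file. This is just a sketch: the field names match the loop items in the error above, but the top-level variable names (gluster_infra_volume_groups / gluster_infra_thinpools) are my assumption about how the wizard lays out the backend_setup inventory.

# Sketch of hand-shortened names; the stable device path is used only for the PV
gluster_infra_volume_groups:
  - vgname: gluster_vg_sdb                    # short, hand-picked VG name
    pvname: /dev/mapper/Samsung_SSD_850_PRO_512GB_S250NXAGA15787L
gluster_infra_thinpools:
  - vgname: gluster_vg_sdb
    thinpoolname: gluster_thinpool_sdb        # short thinpool name instead of the generated one
    poolmetadatasize: 3G
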

Attached is the YAML file used for deployment of the cluster.

--
penguinpages