Error while deploying Hyperconverged oVirt 4.3.3 (el7) + GlusterFS

As the title says, we want to build a three-host hyperconverged oVirt solution. We installed oVirt and Gluster as described in the guide https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hy... When we try to deploy, we get errors that we cannot figure out.

==============================================================
gdeploy creates the following configuration:

hc_nodes:
  hosts:
    virtnodetest-0-0:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/isostorage
          lvname: gluster_lv_isostorage
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstorage
          lvname: gluster_lv_vmstorage
          vgname: gluster_vg_sdb
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 150G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_gluster_vg_sdb
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_isostorage
          lvsize: 250G
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_vmstorage
          lvsize: 3500G
    virtnodetest-0-1:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/isostorage
          lvname: gluster_lv_isostorage
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstorage
          lvname: gluster_lv_vmstorage
          vgname: gluster_vg_sdb
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 150G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_gluster_vg_sdb
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_isostorage
          lvsize: 250G
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_vmstorage
          lvsize: 3500G
    virtnodetest-0-2:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/isostorage
          lvname: gluster_lv_isostorage
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstorage
          lvname: gluster_lv_vmstorage
          vgname: gluster_vg_sdb
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 150G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_gluster_vg_sdb
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_isostorage
          lvsize: 250G
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_vmstorage
          lvsize: 3500G
  vars:
    gluster_infra_disktype: JBOD
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - virtnodetest-0-0
      - virtnodetest-0-1
      - virtnodetest-0-2
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: isostorage
        brick: /gluster_bricks/isostorage/isostorage
        arbiter: 0
      - volname: vmstorage
        brick: /gluster_bricks/vmstorage/vmstorage
        arbiter: 0

=========================================================================
The system returns this error:

PLAY [Setup backend] ***********************************************************

TASK [Gathering Facts] *********************************************************
ok: [virtnodetest-0-2]
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]

TASK [Check if valid hostnames are provided] ***********************************
changed: [virtnodetest-0-1] => (item=virtnodetest-0-1)
changed: [virtnodetest-0-1] => (item=virtnodetest-0-0)
changed: [virtnodetest-0-1] => (item=virtnodetest-0-2)

TASK [Check if provided hostnames are valid] ***********************************
ok: [virtnodetest-0-1] => {"changed": false, "msg": "All assertions passed"}
ok: [virtnodetest-0-0] => {"changed": false, "msg": "All assertions passed"}
ok: [virtnodetest-0-2] => {"changed": false, "msg": "All assertions passed"}

TASK [Check if /var/log has enough disk space] *********************************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [Check if the /var is greater than 15G] ***********************************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [Check if disks have logical block size of 512B] **************************
skipping: [virtnodetest-0-1] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-0] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-2] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})

TASK [Check if logical block size is 512 bytes] ********************************
skipping: [virtnodetest-0-1] => (item=Logical Block Size)
skipping: [virtnodetest-0-0] => (item=Logical Block Size)
skipping: [virtnodetest-0-2] => (item=Logical Block Size)

TASK [Get logical block size of VDO devices] ***********************************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [Check if logical block size is 512 bytes for VDO devices] ****************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] ***
ok: [virtnodetest-0-2]
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]

TASK [gluster.infra/roles/firewall_config : check if required variables are set] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ********
ok: [virtnodetest-0-2] => (item=2049/tcp)
ok: [virtnodetest-0-1] => (item=2049/tcp)
ok: [virtnodetest-0-0] => (item=2049/tcp)
ok: [virtnodetest-0-2] => (item=54321/tcp)
ok: [virtnodetest-0-0] => (item=54321/tcp)
ok: [virtnodetest-0-1] => (item=54321/tcp)
ok: [virtnodetest-0-2] => (item=5900/tcp)
ok: [virtnodetest-0-1] => (item=5900/tcp)
ok: [virtnodetest-0-0] => (item=5900/tcp)
ok: [virtnodetest-0-2] => (item=5900-6923/tcp)
ok: [virtnodetest-0-0] => (item=5900-6923/tcp)
ok: [virtnodetest-0-1] => (item=5900-6923/tcp)
ok: [virtnodetest-0-2] => (item=5666/tcp)
ok: [virtnodetest-0-1] => (item=5666/tcp)
ok: [virtnodetest-0-0] => (item=5666/tcp)
ok: [virtnodetest-0-2] => (item=16514/tcp)
ok: [virtnodetest-0-1] => (item=16514/tcp)
ok: [virtnodetest-0-0] => (item=16514/tcp)

TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] ***
ok: [virtnodetest-0-1] => (item=glusterfs)
ok: [virtnodetest-0-0] => (item=glusterfs)
ok: [virtnodetest-0-2] => (item=glusterfs)

TASK [gluster.infra/roles/backend_setup : Check if vdsm-python package is installed or not] ***
changed: [virtnodetest-0-2]
changed: [virtnodetest-0-1]
changed: [virtnodetest-0-0]

TASK [gluster.infra/roles/backend_setup : Remove the existing LVM filter] ******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Check that the multipath.conf exists] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is enabled if not] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is running] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Create /etc/multipath/conf.d if doesn't exists] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Get the UUID of the devices] *********
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Check that the blacklist.conf exists] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Create blacklist template content] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Add wwid to blacklist in blacklist.conf file] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Reload multipathd] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS distribution] ***
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for debian systems.] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
ok: [virtnodetest-0-2]
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]

TASK [gluster.infra/roles/backend_setup : Install python-yaml package for Debian systems] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Initialize vdo_devs array] ***********
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Record VDO devices (if any)] *********
skipping: [virtnodetest-0-1] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-0] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-2] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})

TASK [gluster.infra/roles/backend_setup : Configure lvm thinpool extend threshold] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Configure lvm thinpool extend percentage] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Check if vdo block device exists] ****
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Record for missing devices for phase 2] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : set fact if it will at least install 1 vdo device] ***

TASK [gluster.infra/roles/backend_setup : Install VDO dependencies] ************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : set fact about vdo installed deps] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Enable and start vdo service] ********
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Set VDO maxDiscardSize as 16M] *******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Stop VDO volumes] ********************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Start VDO volumes] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Check if valid disktype is provided] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] ******
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] ******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : include_tasks] ***********************
included: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml for virtnodetest-0-1, virtnodetest-0-0, virtnodetest-0-2

TASK [gluster.infra/roles/backend_setup : Group devices by volume group name, including existing devices] ***
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Check if vg block device exists] *****
changed: [virtnodetest-0-0] => (item={u'key': u'gluster_vg_sdb', u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]})
changed: [virtnodetest-0-1] => (item={u'key': u'gluster_vg_sdb', u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]})
changed: [virtnodetest-0-2] => (item={u'key': u'gluster_vg_sdb', u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]})

TASK [gluster.infra/roles/backend_setup : Filter none-existing devices] ********
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Record for missing devices for phase 2] ***
ok: [virtnodetest-0-1] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:18:33.575598', u'stderr': u'', u'stdout': u'0', u'changed': True, u'failed': False, u'delta': u'0:00:00.009901', u'cmd': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'item': {u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}], u'key': u'gluster_vg_sdb'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'0'], u'start': u'2021-05-13 12:18:33.565697'})
ok: [virtnodetest-0-0] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 10:52:56.886693', u'stderr': u'', u'stdout': u'0', u'changed': True, u'failed': False, u'delta': u'0:00:00.008123', u'cmd': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'item': {u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}], u'key': u'gluster_vg_sdb'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'0'], u'start': u'2021-05-13 10:52:56.878570'})
ok: [virtnodetest-0-2] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:25:24.420710', u'stderr': u'', u'stdout': u'0', u'changed': True, u'failed': False, u'delta': u'0:00:00.007307', u'cmd': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'item': {u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}], u'key': u'gluster_vg_sdb'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'0'], u'start': u'2021-05-13 12:25:24.413403'})

TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
skipping: [virtnodetest-0-1] => (item={u'key': u'gluster_vg_sdb', u'value': []})
skipping: [virtnodetest-0-0] => (item={u'key': u'gluster_vg_sdb', u'value': []})
skipping: [virtnodetest-0-2] => (item={u'key': u'gluster_vg_sdb', u'value': []})

TASK [gluster.infra/roles/backend_setup : update LVM fact's] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Check if thick-lv block devices exists] ***
changed: [virtnodetest-0-0] => (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'})
changed: [virtnodetest-0-1] => (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'})
changed: [virtnodetest-0-2] => (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'})

TASK [gluster.infra/roles/backend_setup : Record for missing devices for phase 2] ***
skipping: [virtnodetest-0-1] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:18:37.528159', u'stderr': u'', u'stdout': u'1', u'changed': True, u'failed': False, u'delta': u'0:00:00.010032', u'cmd': u' echo "1" \n', u'item': {u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' echo "1" \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'1'], u'start': u'2021-05-13 12:18:37.518127'})
skipping: [virtnodetest-0-0] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 10:53:00.863436', u'stderr': u'', u'stdout': u'1', u'changed': True, u'failed': False, u'delta': u'0:00:00.007459', u'cmd': u' echo "1" \n', u'item': {u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' echo "1" \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'1'], u'start': u'2021-05-13 10:53:00.855977'})
skipping: [virtnodetest-0-2] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:25:28.261106', u'stderr': u'', u'stdout': u'1', u'changed': True, u'failed': False, u'delta': u'0:00:00.007818', u'cmd': u' echo "1" \n', u'item': {u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' echo "1" \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'1'], u'start': u'2021-05-13 12:25:28.253288'})

TASK [gluster.infra/roles/backend_setup : include_tasks] ***********************
included: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml for virtnodetest-0-1, virtnodetest-0-0, virtnodetest-0-2

TASK [gluster.infra/roles/backend_setup : Group devices by volume group name, including existing devices] ***
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Check if vg block device exists] *****

TASK [gluster.infra/roles/backend_setup : Filter none-existing devices] ********
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Make sure thick pvs exists in volume group] ***

TASK [gluster.infra/roles/backend_setup : update LVM fact's] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]

TASK [gluster.infra/roles/backend_setup : Create thick logical volume] *********
failed: [virtnodetest-0-1] (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " WARNING: Device for PV gx6iUE-369Z-3FDP-aRUQ-Wur0-1Xhf-v4g79j not found or rejected by a filter.\n Volume group \"gluster_vg_sdb\" not found\n Cannot process volume group gluster_vg_sdb\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "150G", "vgname": "gluster_vg_sdb"}, "msg": "Volume group gluster_vg_sdb does not exist.", "rc": 5}
failed: [virtnodetest-0-0] (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_sdb\" not found\n Cannot process volume group gluster_vg_sdb\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "150G", "vgname": "gluster_vg_sdb"}, "msg": "Volume group gluster_vg_sdb does not exist.", "rc": 5}
failed: [virtnodetest-0-2] (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_sdb\" not found\n Cannot process volume group gluster_vg_sdb\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "150G", "vgname": "gluster_vg_sdb"}, "msg": "Volume group gluster_vg_sdb does not exist.", "rc": 5}

NO MORE HOSTS LEFT *************************************************************

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
virtnodetest-0-0 : ok=19 changed=3 unreachable=0 failed=1 skipped=41 rescued=0 ignored=0
virtnodetest-0-1 : ok=20 changed=4 unreachable=0 failed=1 skipped=41 rescued=0 ignored=0
virtnodetest-0-2 : ok=19 changed=3 unreachable=0 failed=1 skipped=41 rescued=0 ignored=0

Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for more informations.
======================================================================

How can we resolve this issue?

4.3.10 is the latest release of oVirt 4.3, so I'm not sure why you'd want to install an older version. These installer errors are usually caused by one of three things: the device 'sdb' is smaller than what you're trying to allocate; the device still carries partition or filesystem signatures from previous use (the installer assumes an unformatted drive and stops to protect data); or the system has LVM or multipathd filtering out the drive /dev/sdb. In the second case, wipefs can usually make /dev/sdb clean enough for the script to work. In the third case, you may need to edit /etc/multipath.conf or /etc/lvm/lvm.conf and reboot to make the device available to the installer.
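For the signatures case, a minimal sketch of the wipefs route (this assumes /dev/sdb really is the dedicated, empty Gluster disk on every node; the -a pass is destructive):

# Non-destructive: list any partition/filesystem/RAID/LVM signatures still on the disk
wipefs /dev/sdb
# Destructive: erase all detected signatures so the deployment can create a fresh PV
wipefs -a /dev/sdb
# Confirm nothing stale is still visible to the kernel
lsblk /dev/sdb
pvs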

This looks to me like something I've stumbled across several times. When trying to redo a failed partial HCI installation, I often found the volume setup not working even after clearing "everything" via the 'cleanup partial install' button (I don't recall exactly what it is called). As it turns out, the oVirt setup inserts an exclusion filter into /etc/lvm/lvm.conf that blocks any modification to that device as a protective measure. Try looking for the gx6iUE-369Z-3FDP-aRUQ-Wur0-1Xhf-v4g79j tag in there and eliminate the filter. Before finding this glitch, I found myself resorting to a full base-OS reinstall. And yes, I'd also suggest using the latest 4.3 release, even if it still contains bugs.
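A quick way to check; note that the filter line below is only a hypothetical illustration of the shape such an entry can take, not what your file will literally contain:

# Show any filter/global_filter lines LVM is currently applying
grep -nE '^[[:space:]]*(filter|global_filter)' /etc/lvm/lvm.conf
# Compare against what LVM actually sees; a PV hidden by a filter will be missing here
pvs -o pv_name,pv_uuid
# Hypothetical example of a protective entry that rejects everything but one PV:
# filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-gx6iUE-369Z-3FDP-aRUQ-Wur0-1Xhf-v4g79j$|", "r|.*|" ]
# On oVirt hosts this filter is normally managed by 'vdsm-tool config-lvm-filter',
# which may be the safer way to inspect or regenerate it.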

Hi there! I have to use this older version because I need to migrate a currently running server onto it first, precisely so I can upgrade that server safely afterwards. I've just tried to find the filter you mentioned, but unsuccessfully: "[ gx6iUE-369Z-3FDP-aRUQ-Wur0-1Xhf-v4g79j not found ]". Any other suggestions? :)

I've also tried the command line, using "hosted-engine --deploy" and following the guided setup, but I end up with this error:

[ INFO ] TASK [ovirt.hosted_engine_setup : Validate selected bridge interface if management bridge does not exists]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The selected network interface is not valid"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook

How can I solve this problem?

What is the output of 'ip a s' from the host?

Best Regards,
Strahil Nikolov
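While gathering that output, a few related checks usually narrow this validation error down (the interface name below is a placeholder, not taken from your setup):

# List all interfaces with state and addresses; the NIC chosen for the management
# bridge must be UP and should carry the host's IPv4 address
ip a s
# Check whether a previous deployment attempt left a half-configured bridge behind
ip link show type bridge
# 'em1' is a placeholder: inspect the specific NIC you selected during deployment
ip link show em1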

Actually, I reinstalled CentOS 7, Gluster, and oVirt 4.3.10. Now the situation is as follows:

TASK [gluster.features/roles/gluster_hci : Create the GlusterFS volumes] *******
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: None
failed: [virtnodetest-0-1] (item={u'volname': u'engine', u'brick': u'/gluster_bricks/engine/engine', u'arbiter': 0}) => {"ansible_loop_var": "item", "changed": false, "item": {"arbiter": 0, "brick": "/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "error running gluster (/usr/sbin/gluster --mode=script volume create engine replica 3 transport tcp virtnodetest-0-0:/gluster_bricks/engine/engine virtnodetest-0-1:/gluster_bricks/engine/engine virtnodetest-0-2:/gluster_bricks/engine/engine force) command (rc=1): volume create: engine: failed: Staging failed on virtnodetest-0-2. Error: Host virtnodetest-0-1 not connected\n"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: None
failed: [virtnodetest-0-1] (item={u'volname': u'isostorage', u'brick': u'/gluster_bricks/isostorage/isostorage', u'arbiter': 0}) => {"ansible_loop_var": "item", "changed": false, "item": {"arbiter": 0, "brick": "/gluster_bricks/isostorage/isostorage", "volname": "isostorage"}, "msg": "error running gluster (/usr/sbin/gluster --mode=script volume create isostorage replica 3 transport tcp virtnodetest-0-0:/gluster_bricks/isostorage/isostorage virtnodetest-0-1:/gluster_bricks/isostorage/isostorage virtnodetest-0-2:/gluster_bricks/isostorage/isostorage force) command (rc=1): volume create: isostorage: failed: Staging failed on virtnodetest-0-2. Error: Host virtnodetest-0-1 not connected\n"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: None
failed: [virtnodetest-0-1] (item={u'volname': u'vmstorage', u'brick': u'/gluster_bricks/vmstorage/vmstorage', u'arbiter': 0}) => {"ansible_loop_var": "item", "changed": false, "item": {"arbiter": 0, "brick": "/gluster_bricks/vmstorage/vmstorage", "volname": "vmstorage"}, "msg": "error running gluster (/usr/sbin/gluster --mode=script volume create vmstorage replica 3 transport tcp virtnodetest-0-0:/gluster_bricks/vmstorage/vmstorage virtnodetest-0-1:/gluster_bricks/vmstorage/vmstorage virtnodetest-0-2:/gluster_bricks/vmstorage/vmstorage force) command (rc=1): volume create: vmstorage: failed: Staging failed on virtnodetest-0-2. Error: Host virtnodetest-0-1 not connected\n"}

NO MORE HOSTS LEFT *************************************************************

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
virtnodetest-0-0 : ok=49 changed=10 unreachable=0 failed=0 skipped=179 rescued=0 ignored=0
virtnodetest-0-1 : ok=50 changed=11 unreachable=0 failed=1 skipped=201 rescued=0 ignored=0
virtnodetest-0-2 : ok=49 changed=10 unreachable=0 failed=0 skipped=179 rescued=0 ignored=0

What should I do? All three nodes are listed in /etc/hosts on every node; "gluster peer status" shows them all connected under every possible name; ping and SSH work fine; and all the earlier deployment checks pass. Only this step fails, on node 1.

What is the output of 'gluster pool list'?

Best Regards,
Strahil Nikolov
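While gathering that, a couple of related checks usually explain a "Host ... not connected" staging failure (24007 is the standard glusterd management port):

# Every peer should show as Connected on every node, under the same name everywhere
gluster pool list
gluster peer status
# glusterd peers talk over TCP 24007; the glusterfs firewalld service should be active
firewall-cmd --list-services
# And glusterd itself must be running on all three nodes
systemctl status glusterd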
participants (4)
- Edward Berger
- Strahil Nikolov
- techbreak@icloud.com
- Thomas Hoberg