Single Node Hyperconverged - Failing Gluster Setup

Ovirt newbie here - using v4.4.4.

I have been trying for days to get this installed on my HP DL380p G6. I have 2 x 170GB disks in RAID 0 for the OS and 6 x 330GB disks in RAID 5 for Gluster. DNS is all set up (that took some working out), but I just can't fathom what's (not) happening here. The block size is returned as 512. I've had some help on Reddit, where I was told that oVirt is seeing my single local disk as a multipath device, which it is not!? I think I removed the flag, but it still fails here.

So, the Gluster install fails quite early on, though it carries on creating all the volumes (with default settings) but then gives me the 'Deployment Failed' message :( Here is where it fails... Any help gratefully received!

TASK [fail] ********************************************************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:62
skipping: [ovirt-gluster.whichelo.com] => (item=[{'cmd': 'blockdev --getss /dev/sdb | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'true', 'stderr': '', 'rc': 0, 'start': '2021-02-07 13:21:10.237701', 'end': '2021-02-07 13:21:10.243111', 'delta': '0:00:00.005410', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/sdb | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['true'], 'stderr_lines': [], 'failed': False, 'item': {'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/sdb | grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': '', 'rc': 0, 'start': '2021-02-07 13:21:14.760897', 'end': '2021-02-07 13:21:14.766395', 'delta': '0:00:00.005498', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/sdb | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': [], 'failed': False, 'item': {'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/sdb | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.005410", "end": "2021-02-07 13:21:10.243111", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/sdb | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "rc": 0, "start": "2021-02-07 13:21:10.237701", "stderr": "", "stderr_lines": [], "stdout": "true", "stdout_lines": ["true"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/sdb | grep -Po -q \"4096\" && echo true || echo false\n", "delta": "0:00:00.005498", "end": "2021-02-07 13:21:14.766395", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/sdb | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "rc": 0, "start": "2021-02-07 13:21:14.760897", "stderr": "", "stderr_lines": [], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}

hc_wizard.yml excerpt:

- name: Check if block device is 4KN
  shell: >
    blockdev --getss {{ item.pvname }} | grep -Po -q "4096" && echo true || echo false
  register: is4KN
  with_items: "{{ gluster_infra_volume_groups }}"

- fail:  ################ THIS IS LINE 62 #####################################
    msg: "Mix of 4K and 512 Block devices are not allowed"
  with_nested:
    - "{{ is512.results }}"
    - "{{ is4KN.results }}"
  when: item[0].stdout|bool and item[1].stdout|bool

# logical block size of 512 bytes. To disable the check set
# gluster_features_512B_check to false. DELETE the below task once
# OVirt limitation is fixed
- name: Check if disks have logical block size of 512B
  command: blockdev --getss {{ item.pvname }}
  register: logical_blk_size
  when: gluster_infra_volume_groups is defined and item.pvname is not search("/dev/mapper") and gluster_features_512B_check|default(false)

Can anyone help?
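The failing task is only a wrapper around `blockdev --getss`, which prints the device's logical sector size. A standalone sketch of the probe from hc_wizard.yml, with SECTOR_SIZE standing in for the live `blockdev --getss /dev/sdb` call so it runs without the disk:

```shell
# Standalone rendering of the wizard's sector-size probe (a sketch).
# On the host it would be: SECTOR_SIZE=$(blockdev --getss /dev/sdb)
# 512 is what the poster's RAID controller reports.
SECTOR_SIZE=512
is512=$(echo "$SECTOR_SIZE" | grep -Po -q "512" && echo true || echo false)
is4KN=$(echo "$SECTOR_SIZE" | grep -Po -q "4096" && echo true || echo false)
echo "is512=$is512 is4KN=$is4KN"
```

Note that the fail task at line 62 only fires when one result reports 512 and another reports 4096; with a single 512-byte disk both conditions are never true at once, which matches the "Conditional result was False" / skipping in the log — so the deployment is actually failing at a later step, not at this check.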

What is the output of 'lsblk -t' ?

Best Regards,
Strahil Nikolov
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/BNDBDKC4EBFC6A...

If the disk was previously used, you may need to 'wipefs -a /dev/sdb' to clean out any previous partitioning, etc.

If the installer can't create the gluster PV, it is often because the drive needs to be added to the multipath blacklist. Use lsblk to find the ID and add it to the /etc/multipath.conf blacklist:

[root@ovirtnode2 ~]# lsblk /dev/sdb
NAME                                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdb                                     8:16   0  200G  0 disk
-> 3678da6e715b018f01f1abdb887594aae 253:2    0  200G  0 mpath

Edit /etc/multipath.conf and append the disk wwid to the multipath.conf blacklist:

blacklist {
    wwid 3678da6e715b018f01f1abdb887594aae
}

Then restart the multipathd service:

service multipathd restart

On Tue, Feb 9, 2021 at 2:19 PM Strahil Nikolov via Users <users@ovirt.org> wrote:
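The lsblk check above can be scripted. A minimal sketch, fed the sample output from the reply (rather than a live system), that prints any device whose TYPE column says mpath; on a real host you would pipe `lsblk -rno NAME,TYPE` into it instead:

```shell
# find_mpath: print the NAME of every row whose TYPE column is "mpath".
# Input format is two whitespace-separated columns: NAME TYPE, as produced
# by `lsblk -rno NAME,TYPE`.
find_mpath() {
  awk '$2 == "mpath" {print $1}'
}

# Fed the sample data from the reply above instead of a live lsblk call.
printf '%s\n' \
  'sdb disk' \
  '3678da6e715b018f01f1abdb887594aae mpath' | find_mpath
```

If this prints anything for your single local disk, multipath has claimed it and the blacklist step above applies.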

Thanks - for some reason this part of it now makes sense!

When you create a blacklist file for multipath, use /etc/multipath/conf.d/<someconfig>.conf

Best Regards,
Strahil Nikolov
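The drop-in approach Strahil suggests can be sketched as below. CONFDIR defaults to a temporary directory here so the snippet is safe to run anywhere; on the real host you would set CONFDIR=/etc/multipath/conf.d. The filename local-blacklist.conf is an arbitrary example, and the WWID is the sample one from earlier in the thread:

```shell
# Write the multipath blacklist as a drop-in file instead of editing
# /etc/multipath.conf directly (a sketch; paths are assumptions).
CONFDIR="${CONFDIR:-$(mktemp -d)}"   # use /etc/multipath/conf.d on the host
cat > "$CONFDIR/local-blacklist.conf" <<'EOF'
blacklist {
    wwid 3678da6e715b018f01f1abdb887594aae
}
EOF
cat "$CONFDIR/local-blacklist.conf"
# On the host, follow up with: service multipathd restart
```

Keeping local changes in conf.d means package updates to /etc/multipath.conf won't clobber your blacklist.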

Hi,

I've learned a lot today reading, but it's still not smooth sailing at all! I'm just doing a fresh install, as I did find one way out of my problem above - commenting out the filters in /etc/lvm/lvm.conf - but I couldn't see why this worked!?

I'm going to clean sdb once the install finishes, then look at the lsblk output to check all is well. I had found that sdb was being tagged as type mpath, which I managed to fix with wipefs, so hopefully that will be OK this time round.

What I have found is that I'm still having problems working out the logic of the network. The KVM host (ovirt-kvm.whichelo.com) is fixed IP 192.168.0.40 on my 1st NIC. Also in my DNS are ovirt-engine.whichelo.com on 192.168.0.50 and ovirt-gluster.whichelo.com on 192.168.0.60. I discovered that adding gluster's IP to that adapter allowed me to move on with provisioning, but if I add the engine's (.50) I get the "he_FQDN resolves to this host" message. I'm still working out how to use the NICs (the machine has 4), bonds and VLANs. Installation always stalls at "waiting for host to be up" - I'm guessing my dodgy networking is causing problems.

Any help for a newbie would be very welcome! Is there any way to post screenshots? Thank you so much for trying to help me out - I've found people in the oVirt community very helpful.
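On the "he_FQDN resolves to this host" message: the engine FQDN must resolve in DNS, but its address must not already be configured on the deploy host, because that name belongs to the future engine VM. A rough pre-flight sketch (localhost is a stand-in so it runs anywhere; substitute ovirt-engine.whichelo.com on the real host):

```shell
# Pre-flight DNS sanity check for hosted-engine deployment (a sketch).
# FQDN defaults to localhost so the snippet runs anywhere; on the host
# you would test the engine name, e.g. FQDN=ovirt-engine.whichelo.com
FQDN="${FQDN:-localhost}"
ADDR=$(getent hosts "$FQDN" | awk '{print $1; exit}')
if [ -n "$ADDR" ]; then
  echo "$FQDN resolves to $ADDR"
else
  echo "no DNS entry for $FQDN"
fi
# Compare against the host's own addresses (e.g. 'ip -o addr show'):
# the engine FQDN must NOT resolve to an address already on this host.
```

In the setup described above, that means 192.168.0.50 should exist only in DNS until the engine VM is up, not on any of the host's NICs.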
Participants (3)

- Edward Berger
- jhamiltonactually@gmail.com
- Strahil Nikolov