Hello,
Can you share these files from the node with me:
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml
and /etc/ansible/hc_wizard_inventory.yml
Thanks
On Wed, May 18, 2022 at 4:15 PM <bpbp(a)fastmail.com> wrote:
I've run into the following issue with oVirt node on a single host using the single node hyperconverged wizard:
TASK [gluster.features/roles/gluster_hci : Create the GlusterFS volumes] *******
failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'engine', 'brick': '/gluster_bricks/engine/engine', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": "gluster volume create engine replica __omit_place_holder__2e79c54ac7e0f34125a3ed6965d6de550953524f transport tcp ovirt01.syd1.fqdn.com:/gluster_bricks/engine/engine force\n",
"delta": "0:00:00.086880", "end": "2022-05-18 10:28:49.211929",
"item": {"arbiter": 0, "brick": "/gluster_bricks/engine/engine", "volname": "engine"},
"msg": "non-zero return code", "rc": 1, "start": "2022-05-18 10:28:49.125049",
"stderr": "replica count should be greater than 1\n\nUsage:\nvolume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> <TA-BRICK>... [force]",
"stderr_lines": ["replica count should be greater than 1", "", "Usage:", "volume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> <TA-BRICK>... [force]"],
"stdout": "", "stdout_lines": []}
failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'data', 'brick': '/gluster_bricks/data/data', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": "gluster volume create data replica __omit_place_holder__2e79c54ac7e0f34125a3ed6965d6de550953524f transport tcp ovirt01.syd1.fqdn.com:/gluster_bricks/data/data force\n",
"delta": "0:00:00.088490", "end": "2022-05-18 10:28:49.905458",
"item": {"arbiter": 0, "brick": "/gluster_bricks/data/data", "volname": "data"},
"msg": "non-zero return code", "rc": 1, "start": "2022-05-18 10:28:49.816968",
"stderr": "replica count should be greater than 1\n\nUsage:\nvolume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> <TA-BRICK>... [force]",
"stderr_lines": ["replica count should be greater than 1", "", "Usage:", "volume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> <TA-BRICK>... [force]"],
"stdout": "", "stdout_lines": []}
failed: [ovirt01.syd1.fqdn.com] (item={'volname': 'vmstore', 'brick': '/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": "gluster volume create vmstore replica __omit_place_holder__2e79c54ac7e0f34125a3ed6965d6de550953524f transport tcp ovirt01.syd1.fqdn.com:/gluster_bricks/vmstore/vmstore force\n",
"delta": "0:00:00.086626", "end": "2022-05-18 10:28:50.604015",
"item": {"arbiter": 0, "brick": "/gluster_bricks/vmstore/vmstore", "volname": "vmstore"},
"msg": "non-zero return code", "rc": 1, "start": "2022-05-18 10:28:50.517389",
"stderr": "replica count should be greater than 1\n\nUsage:\nvolume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> <TA-BRICK>... [force]",
"stderr_lines": ["replica count should be greater than 1", "", "Usage:", "volume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> <TA-BRICK>... [force]"],
"stdout": "", "stdout_lines": []}
The only non-default settings I changed were the stripe size and the number of disks, following the steps here:
https://www.ovirt.org/dropped/gluster-hyperconverged/chap-Single_node_hyp...
Any ideas to work around this? I will eventually be deploying to six nodes, but wanted to try out the engine before the rest of my hardware arrives :)
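(If a plain single-brick distribute volume is acceptable for this trial, an untested sketch of a manual workaround would be to create each volume without the replica clause, which the Usage text above shows is optional, e.g.:

```shell
# Untested sketch: create the engine volume as a single-brick
# distribute volume (no replica clause), then start it. Repeat
# for data and vmstore with their respective bricks.
gluster volume create engine transport tcp \
    ovirt01.syd1.fqdn.com:/gluster_bricks/engine/engine force
gluster volume start engine
```

and then resume the deployment, though the wizard's intended fix may differ.)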
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M3HBNBFFNUV...