Dear friends,
Thanks to Donald and Strahil, my earlier Gluster deploy issue was resolved by disabling
multipath on the NVMe drives; one way of doing that is sketched below for reference.
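(For anyone who hits the same thing later: the idea is to keep multipath from claiming the
NVMe devices. The following is only an illustrative sketch, not a copy of my exact
configuration, and the blacklist stanza or its location may differ per setup.)

# illustrative sketch only -- append a multipath blacklist stanza for NVMe
# devices and restart multipathd so it stops claiming them
cat >> /etc/multipath.conf <<'EOF'
blacklist {
    devnode "^nvme.*"
}
EOF
systemctl restart multipathd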
The Gluster deployment is now failing on the three-node hyperconverged oVirt v4.3.3 deployment at:
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **********
task path: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
with:
"stdout": "One or more bricks could be down. Please execute the command
again after bringing all bricks online and finishing any pending heals\nVolume heal
failed."
Specifically:
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **********
task path: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine', 'brick': '/gluster_bricks/engine/engine', 'arbiter': 0}) =>
  {"ansible_loop_var": "item", "changed": true,
   "cmd": ["gluster", "volume", "heal", "engine", "granular-entry-heal", "enable"],
   "delta": "0:00:10.112451", "end": "2020-12-18 19:50:22.818741",
   "item": {"arbiter": 0, "brick": "/gluster_bricks/engine/engine", "volname": "engine"},
   "msg": "non-zero return code", "rc": 107,
   "start": "2020-12-18 19:50:12.706290", "stderr": "", "stderr_lines": [],
   "stdout": "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed.",
   "stdout_lines": ["One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'data', 'brick': '/gluster_bricks/data/data', 'arbiter': 0}) =>
  {"ansible_loop_var": "item", "changed": true,
   "cmd": ["gluster", "volume", "heal", "data", "granular-entry-heal", "enable"],
   "delta": "0:00:10.110165", "end": "2020-12-18 19:50:38.260277",
   "item": {"arbiter": 0, "brick": "/gluster_bricks/data/data", "volname": "data"},
   "msg": "non-zero return code", "rc": 107,
   "start": "2020-12-18 19:50:28.150112", "stderr": "", "stderr_lines": [],
   "stdout": "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed.",
   "stdout_lines": ["One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'vmstore', 'brick': '/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) =>
  {"ansible_loop_var": "item", "changed": true,
   "cmd": ["gluster", "volume", "heal", "vmstore", "granular-entry-heal", "enable"],
   "delta": "0:00:10.113203", "end": "2020-12-18 19:50:53.767864",
   "item": {"arbiter": 0, "brick": "/gluster_bricks/vmstore/vmstore", "volname": "vmstore"},
   "msg": "non-zero return code", "rc": 107,
   "start": "2020-12-18 19:50:43.654661", "stderr": "", "stderr_lines": [],
   "stdout": "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed.",
   "stdout_lines": ["One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals", "Volume heal failed."]}
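In case it helps with troubleshooting, these are the sorts of checks I believe I can run by
hand on the nodes (listed for illustration; I have not captured their output here yet):

# re-run the failing step manually for one of the volumes
gluster volume heal engine granular-entry-heal enable

# confirm the volume and all of its bricks are actually online
gluster volume status engine
gluster volume info engine

# check for pending or failed heals
gluster volume heal engine info

# confirm glusterd is running and all three peers are connected
systemctl status glusterd
gluster peer status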
Any suggestions regarding troubleshooting, insight, or recommendations for further reading are
greatly appreciated. I apologize for the volume of email; I am starting this as a separate
thread only because it appears to be a new and presumably unrelated issue. I also welcome any
recommendations on how I can improve my forum etiquette.
Respectfully,
Charles