Gluster mount still fails on Engine deployment - any suggestions...

Hi, the Engine deployment fails here:

[ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Unexpected exception]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Unexpected exception]\". HTTP response code is 400."}

However, Gluster looks good, and I have reinstalled all nodes from scratch.

[root@ovirt3 ~]# gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs3.gluster.private:/gluster_bricks/
data/data                                   49152     0          Y       3756
Brick gfs2.gluster.private:/gluster_bricks/
data/data                                   49153     0          Y       3181
Brick gfs1.gluster.private:/gluster_bricks/
data/data                                   49152     0          Y       15548
Self-heal Daemon on localhost               N/A       N/A        Y       17602
Self-heal Daemon on gfs1.gluster.private    N/A       N/A        Y       15706
Self-heal Daemon on gfs2.gluster.private    N/A       N/A        Y       3348

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs3.gluster.private:/gluster_bricks/
engine/engine                               49153     0          Y       3769
Brick gfs2.gluster.private:/gluster_bricks/
engine/engine                               49154     0          Y       3194
Brick gfs1.gluster.private:/gluster_bricks/
engine/engine                               49153     0          Y       15559
Self-heal Daemon on localhost               N/A       N/A        Y       17602
Self-heal Daemon on gfs1.gluster.private    N/A       N/A        Y       15706
Self-heal Daemon on gfs2.gluster.private    N/A       N/A        Y       3348

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs3.gluster.private:/gluster_bricks/
vmstore/vmstore                             49154     0          Y       3786
Brick gfs2.gluster.private:/gluster_bricks/
vmstore/vmstore                             49152     0          Y       2901
Brick gfs1.gluster.private:/gluster_bricks/
vmstore/vmstore                             49154     0          Y       15568
Self-heal Daemon on localhost               N/A       N/A        Y       17602
Self-heal Daemon on gfs1.gluster.private    N/A       N/A        Y       15706
Self-heal Daemon on gfs2.gluster.private    N/A       N/A        Y       3348

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
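Since every brick and self-heal daemon above reports Online = Y, one thing worth ruling out before re-running the deployment is a brick silently dropping offline between checks. The Online column can be scanned mechanically; a minimal sketch (not from the thread, and the `check_online` helper name is made up) using a few brick rows pasted from the output above:

```shell
# Sketch: scan gluster status rows for anything whose Online flag is
# not "Y". In practice you would pipe `gluster volume status` itself.
check_online() {
  # Status rows end in: <TCP port> <RDMA port> <Online> <Pid>,
  # so the Online flag is the second-to-last field.
  awk 'NF >= 4 && $(NF-1) == "N" { print "offline:", $0; bad = 1 }
       END { exit bad }'
}

# Sample rows taken from the data volume output above.
sample='data/data 49152 0 Y 3756
data/data 49153 0 Y 3181
data/data 49152 0 Y 15548'

if printf '%s\n' "$sample" | check_online; then
  echo "all online"
fi
```

Since all three sample rows carry Y, the function exits 0 and the script reports all bricks online; a row with N would be printed and the exit status would be non-zero, which makes the check easy to drop into a pre-deployment script.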

I have failed on the hosted engine VM IP. I set it up as DHCP, with /etc/hosts entries set as well as DNS. Can I recover from this, or do I need to start again?

[ INFO ] TASK [ovirt.hosted_engine_setup : Get target engine VM IP address from VDSM stats]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fail if Engine IP is different from engine's he_fqdn resolved IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM IP address is while the engine's he_fqdn controller.kvm.private resolves to 192.168.100.179. If you are using DHCP, check your DHCP reservation configuration"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ INFO ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] ok: [localhost]
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20191212090248.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20191212073933-kim3n9.log
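The check that fails here is a straight comparison between the IP VDSM reports for the new engine VM and the address the he_fqdn resolves to; in this run the VDSM side came back empty (note the gap in "IP address is while..."), which per the error message itself usually points at the DHCP reservation. An illustrative sketch of that comparison, where the variable values are stand-ins taken from the log rather than live lookups:

```shell
# Illustrative only: reproduce the comparison the playbook performs.
# In the failing run VDSM returned no IP for the engine VM, while DNS
# resolved the he_fqdn, so the two sides cannot match.
vdsm_reported_ip=""                  # what VDSM saw for the engine VM (empty here)
fqdn_resolved_ip="192.168.100.179"   # what controller.kvm.private resolves to

if [ "$vdsm_reported_ip" = "$fqdn_resolved_ip" ]; then
  echo "OK: engine VM IP matches he_fqdn"
else
  echo "MISMATCH: VDSM saw '${vdsm_reported_ip}', DNS says '${fqdn_resolved_ip}'"
fi
```

Fixing the DHCP side so the engine VM's MAC address actually receives the reserved 192.168.100.179 lease should make both sides agree on a re-deploy, as the error message suggests.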
On 5 Dec 2019, at 11:16, rob.downer@orbitalsystems.co.uk wrote:
participants (2)
- Rob
- rob.downer@orbitalsystems.co.uk