
OK so, I enabled the firewall service as recommended, but it was already enabled. I scrubbed the whole setup again with:

ovirt-hosted-engine-cleanup
vdsm-tool configure --force
systemctl restart libvirtd
systemctl restart vdsm

I restarted all hosts again, went to the Setup Gluster & Engine button, and selected the already-configured Gluster. I got this far with the gluster storage recognised by the system for the first time. The wizard automatically returned this configuration:

Storage
  Storage Type: glusterfs
  Storage Domain Connection: gfs1.gluster.private:/engine
  Mount Options: backup-volfile-servers=gfs2.gluster.private:gfs3.gluster.private
  Disk Size (GiB): 58

Installation failed again with:

[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch host facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch cluster ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch cluster facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add NFS storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Unexpected exception]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Unexpected exception]\". HTTP response code is 400."}
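[Editor's note: since the Ansible task fails with an opaque "Unexpected exception", the real cause usually has to be dug out of the deployment and host logs. A minimal sketch of where to look, using the standard oVirt log locations; the grep patterns are only an assumption about what the traceback contains:]

```shell
# Inspect the most recent hosted-engine setup log for the underlying traceback
ls -t /var/log/ovirt-hosted-engine-setup/*.log | head -1 | xargs grep -i -A5 'exception'

# On the host, check vdsm for storage-domain / gluster errors around the same time
grep -iE 'glusterfs|storagedomain' /var/log/vdsm/vdsm.log | tail -20
```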
On 26 Nov 2019, at 05:44, Strahil <hunter86_bg@yahoo.com> wrote:
You need to set it permanently as well via '--permanent', otherwise after a reload it's gone.
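[Editor's note: firewalld can also copy the entire current runtime configuration into the permanent one in a single step, so each rule does not have to be repeated with '--permanent'. A sketch using standard firewall-cmd options:]

```shell
# Make all current runtime rules permanent in one step
firewall-cmd --runtime-to-permanent

# Reload and confirm the service survives
firewall-cmd --reload
firewall-cmd --list-services
```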
Best Regards, Strahil Nikolov
On Nov 25, 2019 19:51, Rob <rob.downer@orbitalsystems.co.uk> wrote:

[root@ovirt1 ~]# firewall-cmd --add-service=cockpit
Warning: ALREADY_ENABLED: 'cockpit' already in 'public'
success
[root@ovirt1 ~]#
On 25 Nov 2019, at 17:16, Parth Dhanjal <dparth@redhat.com> wrote:
Rather than disabling firewalld, you can add the ports and restart the firewalld service:
# firewall-cmd --add-service=cockpit
# firewall-cmd --add-service=cockpit --permanent
On Mon, Nov 25, 2019 at 10:43 PM Amit Bawer <abawer@redhat.com> wrote:

Is firewalld running?
systemctl disable --now firewalld
On Monday, November 25, 2019, Rob <rob.downer@orbitalsystems.co.uk> wrote:

It can’t be DNS, since the engine runs on a separate (front-end) network anyway, so why can’t it reach the volume, I wonder.
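[Editor's note: even with the engine on a separate front-end network, the mount still depends on the host resolving the Gluster hostnames in the storage connection string. A quick check, with the hostnames taken from the wizard output earlier in the thread:]

```shell
# Confirm each Gluster server name resolves from the host
for h in gfs1.gluster.private gfs2.gluster.private gfs3.gluster.private; do
    getent hosts "$h" || echo "no resolution for $h"
done

# Check the glusterd management port (24007) is reachable on the primary server
nc -zv gfs1.gluster.private 24007
```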
On 25 Nov 2019, at 12:55, Gobinda Das <godas@redhat.com> wrote:
There could be two reasons:
1. Your gluster service may not be running.
2. The <volumename> mentioned in the Storage Connection may not exist.
Can you please paste the output of "gluster volume status"?
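[Editor's note: both possible causes above can be checked directly on one of the Gluster hosts. A sketch; the volume name "engine" is taken from the connection string gfs1.gluster.private:/engine elsewhere in the thread:]

```shell
# 1) Is the gluster management daemon running?
systemctl status glusterd

# 2) Does the volume exist, and are all bricks online?
gluster volume list
gluster volume status engine
gluster volume info engine
```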
On Mon, Nov 25, 2019 at 5:03 PM Rob <rob.downer@orbitalsystems.co.uk> wrote:

[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400."}