You also need to make it permanent via '--permanent'; otherwise it's gone after a reload.
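
For example (assuming the default 'public' zone), you can verify the rule survives a reload:

# firewall-cmd --add-service=cockpit --permanent
# firewall-cmd --reload
# firewall-cmd --list-services   # cockpit should still be listed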

Best Regards,
Strahil Nikolov

On Nov 25, 2019 19:51, Rob <rob.downer@orbitalsystems.co.uk> wrote:
[root@ovirt1 ~]# firewall-cmd --add-service=cockpit
Warning: ALREADY_ENABLED: 'cockpit' already in 'public'
success
[root@ovirt1 ~]# 



On 25 Nov 2019, at 17:16, Parth Dhanjal <dparth@redhat.com> wrote:

Rather than disabling firewalld,
you can add the cockpit service and make the rule permanent:

# firewall-cmd --add-service=cockpit
# firewall-cmd --add-service=cockpit --permanent


On Mon, Nov 25, 2019 at 10:43 PM Amit Bawer <abawer@redhat.com> wrote:
Is firewalld running? To rule it out temporarily, try:

systemctl disable --now firewalld
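
If that confirms firewalld was the blocker, you can bring it back afterwards and open the needed services instead:

systemctl enable --now firewalld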

On Monday, November 25, 2019, Rob <rob.downer@orbitalsystems.co.uk> wrote:
It can’t be DNS, since the engine runs on a separate (front-end) network anyway, so I wonder why it can’t reach the volume.


On 25 Nov 2019, at 12:55, Gobinda Das <godas@redhat.com> wrote:

There could be two reasons:
1. Your gluster service may not be running.
2. The <volumename> mentioned in the storage connection may not exist.
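
A quick way to check both (assuming glusterd is managed by systemd):

# systemctl status glusterd   # is the gluster service running?
# gluster volume list         # does the volume from the storage connection exist?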

Can you please paste the output of "gluster volume status"?

On Mon, Nov 25, 2019 at 5:03 PM Rob <rob.downer@orbitalsystems.co.uk> wrote:
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400."}
