Hi Rob,
Can you check the Gluster volume?
gluster volume status engine
gluster volume heal engine info summary
Still, it is strange that the firewall has such differences between the nodes.
The fastest way to sync that node to the settings of the others is (a rough command sketch follows below):
1. Back up /etc/firewalld
2. Remove /etc/firewalld
3. Copy /etc/firewalld from another node (either scp or rsync will do)
4. Restart the firewalld daemon
5. Do a test firewalld reload
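
A minimal sketch of those steps, run on the out-of-sync node (the peer hostname "node2" is only a placeholder - use one of your other hosts):

cp -a /etc/firewalld /etc/firewalld.bak            # 1. backup
rm -rf /etc/firewalld                              # 2. remove
rsync -av node2:/etc/firewalld/ /etc/firewalld/    # 3. copy from another node (rsync recreates the directory)
systemctl restart firewalld                        # 4. restart the daemon
firewall-cmd --reload                              # 5. test reload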
Best Regards,
Strahil Nikolov

On Nov 27, 2019 00:09, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
Is this the issue?
On the deployment host the firewall is active, but only on eno2; eno1 appears to have decided to be unmanaged.
Also, the deployment host has 17 rules active, while the other two have 7 each.
Zone               Interfaces   IP Range
Public (default)   eno2         *

Unmanaged Interfaces
Name           IP Address           Sending   Receiving
;vdsmdummy;    Inactive
eno1
ovirtmgmt      192.168.100.38/24
virbr0-nic
>
> On 25 Nov 2019, at 17:16, Parth Dhanjal <dparth(a)redhat.com> wrote:
>
> Rather than disabling firewalld,
> You can add the ports and restart the firewalld service
>
> # firewall-cmd --add-service=cockpit
>
> # firewall-cmd --add-service=cockpit --permanent
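>
> A reload should also apply the permanent rule, if you prefer that to a full service restart:
>
> # firewall-cmd --reload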
>
>
>
> On Mon, Nov 25, 2019 at 10:43 PM Amit Bawer <abawer(a)redhat.com> wrote:
>>
>> Is firewalld running?
>>
>> systemctl disable --now firewalld
>>
>> On Monday, November 25, 2019, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>
>>> It can’t be DNS, since the engine runs on a separate network anyway (i.e. the front end), so why can’t it reach the volume, I wonder.
>>>
>>>
>>>> On 25 Nov 2019, at 12:55, Gobinda Das <godas(a)redhat.com> wrote:
>>>>
>>>> There could be two reasons:
>>>> 1- Your gluster service may not be running.
>>>> 2- The <volumename> mentioned in Storage Connection may not exist.
>>>>
>>>> Can you please paste the output of "gluster volume status"?
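>>>>
>>>> A quick sketch for checking both points (the volume name "engine" is the one assumed earlier in this thread):
>>>>
>>>> systemctl status glusterd
>>>> gluster volume status
>>>> gluster volume info engine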
>>>>
>>>> On Mon, Nov 25, 2019 at 5:03 PM Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>
>>>>> [ INFO ] skipping: [localhost]
>>>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
>>>>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP response code is 400.
>>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP response code is 400."}
>>>>>
>>>>>
>>>>>> On 25 Nov 2019, at 09:16, Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>
>>>>>> Yes,
>>>>>>
>>>>>> I’ll restart all nodes after wiping the failed setup of the hosted engine using:
>>>>>>
>>>>>> ovirt-hosted-engine-cleanup
>>>>>> vdsm-tool configure --force
>>>>>> systemctl restart libvirtd
>>>>>> systemctl restart vdsm
>>>>>>
>>>>>> although last time I did
>>>>>>
>>>>>> systemctl restart vdsm
>>>>>>
>>>>>> VDSM did not restart; maybe that is OK, as the hosted engine was then de-deployed, or is that the issue?
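>>>>>>
>>>>>> (As a side note: the VDSM systemd unit is normally named vdsmd, so if "systemctl restart vdsm" did nothing, a sketch of the restart/check would be:)
>>>>>>
>>>>>> systemctl restart vdsmd
>>>>>> systemctl status vdsmd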
>>>>>>
>>>>>>
>>>>>>> On 25 Nov 2019, at 09:13, Parth Dhanjal <dparth(a)redhat.com> wrote:
>>>>>>>
>>>>>>> Can you please share the error in case it fails again?
>>>>>>>
>>>>>>> On Mon, Nov 25, 2019 at 2:42 PM Rob <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>>>
>>>>>>>> Hmm, I’ll try again; that failed last time.
>>>>>>>>
>>>>>>>>
>>>>>>>>> On 25 Nov 2019, at 09:08, Parth Dhanjal <dparth(a)redhat.com> wrote:
>>>>>>>>>
>>>>>>>>> Hey!
>>>>>>>>>
>>>>>>>>> For Storage Connection you can add - <hostname1>:/engine
>>>>>>>>> And for Mount Options - backup-volfile-servers=<hostname2>:<hostname3>
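>>>>>>>>>
>>>>>>>>> For example, assuming three hosts named host1/host2/host3.example and a volume named engine (placeholders - substitute your own hostnames and volume name):
>>>>>>>>>
>>>>>>>>> Storage Connection: host1.example:/engine
>>>>>>>>> Mount Options: backup-volfile-servers=host2.example:host3.example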
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Nov 25, 2019 at 2:31 PM <rob.downer(a)orbitalsystems.co.uk> wrote:
>>>>>>>>>>
>>>>>>>>>> So...
>>>>>>>>>>
>>>>>>>>>> I have got to the last step
>>>>>>>>>>
>>>>>>>>>> 3 machines with Gluster storage configured; however, at the last screen
>>>>>>>>>>
>>>>>>>>>> ...deploying the engine to Gluster, the wizard does not auto-fill the two fields
>>>>>>>>>>
>>>>>>>>>> Hosted Engine Deployment
>>>>>>>>>>
>>>>>>>>>> Storage Connection
>>>>>>>>>> and
>>>>>>>>>> Mount Options
>>>>>>>>>>
>>>>>>>>>> I also had to expand /tmp as it was not big enough to fit the engine before moving...
>>>>>>>>>>
>>>>>>>>>> What can I do to get the auto-complete sorted out?
>>>>>>>>>>
>>>>>>>>>> I have tried entering ovirt1.kvm.private:/gluster_lv_engine - the volume name
>>>>>>>>>> and
>>>>>>>>>> ovirt1.kvm.private:/gluster_bricks/engine
>>>>>>>>>>
>>>>>>>>>> Ovirt1 being the actual machine I'm running this on.
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>>
>>>> Thanks,
>>>> Gobinda
>>>
>>>