fresh ovirt node 4.4.6 fail on firewalld both host and engine deployment

Hello -

Deployed fresh oVirt Node 4.4.6, and the only thing I did to the system was configure the NIC with nmtui.

During the gluster install, the installation errored out with:

gluster-deployment-1620832547044.log:failed: [n2] (item=5900/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900/tcp", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: ALREADY_ENABLED: '5900:tcp' already in 'public' Permanent and Non-Permanent(immediate) operation"}

The fix here was easy - I just deleted the port it was complaining about with firewall-cmd, restarted the installation, and it was all fine.

During the hosted engine deployment, when the VM is being deployed, it dies here:

[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Open a port on firewalld]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: ALREADY_ENABLED: '6900:tcp' already in 'public' Non-permanent operation"}

The issue here is that I do not have access to the engine VM, as it is in a transient state: when the deployment fails, the current image is discarded once the ansible playbook is kicked off again.

I cannot find any BZ on this, and Google is turning up nothing. I don't think firewalld failing because the firewall rule already exists should be a reason to exit the installation.

The interesting part is that this only fails on certain ports. E.g., when I reran the gluster wizard after 5900 failed, the other ports were presumably still added to the firewall, and the installation completed.

Suggestions?

-- *Notice to Recipient*: https://www.fixflyer.com/disclaimer
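For anyone hitting the same thing, the manual fix can be sketched in shell. This is just an illustration, not part of the deployment tooling; the sample `msg` string is copied from the log line above, and the sed pattern is an assumption about the (apparently stable) error format:

```shell
# Error line as it appears in gluster-deployment-*.log
msg="ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: ALREADY_ENABLED: '5900:tcp' already in 'public' Permanent and Non-Permanent(immediate) operation"

# Pull out the offending port/protocol pair (5900:tcp -> 5900/tcp)
port=$(printf '%s\n' "$msg" | sed -n "s/.*ALREADY_ENABLED: '\([0-9]*\):\([a-z]*\)'.*/\1\/\2/p")
echo "$port"   # prints: 5900/tcp

# Then remove it from both permanent and runtime config before re-running:
#   firewall-cmd --zone=public --remove-port="$port" --permanent
#   firewall-cmd --zone=public --remove-port="$port"
```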

I also just upgraded to 4.6.6.1 and it is still occurring.

Hello. I know this error. Check which ports are used in the firewalld configuration (6900 here). In the gluster wizard, click the "Edit" button and remove the gluster firewall config line for "port 6900". Save your configuration and try to deploy again. Regards!
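A quick way to confirm the conflict on the host side before editing the wizard config (a sketch; the sample port list below is illustrative only — on a real host it would come from `firewall-cmd --zone=public --list-ports`):

```shell
# On a real host: ports="$(firewall-cmd --zone=public --list-ports)"
# Sample output, illustrative only:
ports="5900-6923/tcp 6900/tcp 54321/tcp 16514/tcp"

# Is the port the deployment is about to open already present?
case " $ports " in
  *" 6900/tcp "*) echo "6900/tcp already enabled" ;;   # prints this branch
  *)              echo "6900/tcp not enabled yet" ;;
esac
```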

Yep! I had thought the error was reporting the port as already configured inside the seed engine VM, not on the actual host. I deleted the firewalld 6900 port addition and everything seems to be flowing through now.

Hi,
Thanks, Charles and Patrick, for reporting the problem and the solution! Would one of you like to open a bug on this? I agree that the deploy should not fail on this.

Is it common to use port 6900 for gluster?

Best regards,
-- Didi

I don't see 6900 in https://github.com/gluster/glusterfs/blob/devel/extras/firewalld/glusterfs.x...

Best Regards,
Strahil Nikolov

Same situation. I had to remove port 5900 from the host to complete the installation:

firewall-cmd --zone public --remove-port 5900/tcp --permanent

As you have already mentioned, there is no access to the hosted-engine VM, so as a workaround I commented out the open-port task in the following role. This worked fine. I assume the port must be opened by default in the hosted-engine VM image.

/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml

# - name: Open a port on firewalld
#   firewalld:
#     port: "{{ he_webui_forward_port }}/tcp"
#     permanent: false
#     immediate: true
#     state: enabled

It looks like a fixed issue in a very old ansible version: https://github.com/ansible/ansible/issues/23895

Not sure why we see this here. There are no leading spaces in the port as the issue mentions, and I cannot replicate the issue on the host using the following playbook:

---
- name: test firewalld
  hosts: localhost
  tasks:
    - name: Open Common Public Ports
      firewalld:
        port: "{{ item }}"
        permanent: true
        state: enabled
        zone: public
        immediate: true
      with_items:
        - 5900-6923/tcp

Petros
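A less invasive alternative to commenting the task out entirely might be to treat firewalld's ALREADY_ENABLED as success rather than a failure. This is an untested sketch using standard Ansible error handling, not something from the shipped role:

```yaml
# Hypothetical variant of the role's task: register the result and only
# fail when the error is something other than ALREADY_ENABLED.
- name: Open a port on firewalld
  firewalld:
    port: "{{ he_webui_forward_port }}/tcp"
    permanent: false
    immediate: true
    state: enabled
  register: fw_open
  failed_when:
    - fw_open is failed
    - "'ALREADY_ENABLED' not in (fw_open.msg | default(''))"
```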
participants (5)
- Charles Kozler
- Patrick Lomakin
- ppetrou@gmail.com
- Strahil Nikolov
- Yedidyah Bar David