According to the output, the conditional expects
firewalld_s.status to be a dictionary containing SubState and
LoadState keys, but on your host it doesn't have them.
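
For context, the failing pre-check boils down to something
like the snippet below (a sketch reconstructed from the quoted
error output, not a verbatim copy of validate_firewalld.yml).
It presumably registers the result of a systemd status lookup
for the firewalld unit and then evaluates SubState and
LoadState from the registered status dictionary:

- name: Check firewalld status
  ansible.builtin.systemd:
    name: firewalld
  register: firewalld_s

- name: Enforce firewalld status
  ansible.builtin.fail:
    msg: firewalld must be running and not masked before deploying hosted-engine
  when: >-
    firewalld_s.status.SubState != 'running'
    or firewalld_s.status.LoadState == 'masked'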

What is the status of your firewalld?
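
A quick way to see what the systemd module actually returns
for that unit on the host is a small playbook like this
(illustrative only; the firewalld_s name just mirrors the
role's register):

- hosts: localhost
  tasks:
    - name: Gather firewalld unit status
      ansible.builtin.systemd:
        name: firewalld
      register: firewalld_s

    - name: Show the returned status dictionary
      ansible.builtin.debug:
        var: firewalld_s.status

If firewalld is not installed, or the unit cannot be queried
properly, that status dictionary can come back without
SubState/LoadState, which would match the error above.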

Best Regards,
Strahil Nikolov

On Thu, Feb 11, 2021 at 16:59, lejeczek via Users
<users@ovirt.org> wrote:
Hi,

I filed a Bugzilla report about this problem a while ago, but
since I got much better feedback here on the list (about a
different issue), I thought I would try here again.
I am attempting to deploy the hosted engine and it fails:
...

[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check
firewalld status]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce
firewalld status]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The
conditional check 'firewalld_s.status.SubState != 'running'
or firewalld_s.status.LoadState == 'masked'' failed. The
error was: error while evaluating conditional
(firewalld_s.status.SubState != 'running' or
firewalld_s.status.LoadState == 'masked'): 'dict object' has
no attribute 'SubState'\n\nThe error appears to be in
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml':
line 8, column 5, but may\nbe elsewhere in the file
depending on the exact syntax problem.\n\nThe offending line
appears to be:\n\n    register: firewalld_s\n  - name:
Enforce firewalld status\n    ^ here\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed
executing ansible-playbook

This is on CentOS Stream with what should be the latest
versions of these packages:

cockpit-ovirt-dashboard-0.14.20-0.0.master.20210210161723.git7ea56eb.el8.noarch
ovirt-ansible-collection-1.3.2-0.1.master.20210210132916.el8.noarch
ovirt-engine-appliance-4.4-20210210182638.1.el8.x86_64
ovirt-host-4.4.6-0.0.20210127122119.gitef84c5a.el8.x86_64
ovirt-host-dependencies-4.4.6-0.0.20210127122119.gitef84c5a.el8.x86_64
ovirt-hosted-engine-ha-2.4.7-0.0.master.20210203134854.20210203134846.git7d297c2.el8.noarch
ovirt-hosted-engine-setup-2.5.0-0.0.master.20201216174101.git2a94b06.el8.noarch
ovirt-imageio-client-2.2.0-0.202102041750.git98b0a36.el8.x86_64
ovirt-imageio-common-2.2.0-0.202102041750.git98b0a36.el8.x86_64
ovirt-imageio-daemon-2.2.0-0.202102041750.git98b0a36.el8.x86_64
ovirt-openvswitch-2.11-0.2020061801.el8.noarch
ovirt-openvswitch-ovn-2.11-0.2020061801.el8.noarch
ovirt-openvswitch-ovn-common-2.11-0.2020061801.el8.noarch
ovirt-openvswitch-ovn-host-2.11-0.2020061801.el8.noarch
ovirt-provider-ovn-driver-1.2.34-0.20201207083749.git75016ed.el8.noarch
ovirt-python-openvswitch-2.11-0.2020061801.el8.noarch
ovirt-release-master-4.4.5-0.0.master.20210210011142.git0fb6ce0.el8.noarch
ovirt-vmconsole-1.0.9-1.20201130191550.git0bf874a.el8.noarch
ovirt-vmconsole-host-1.0.9-1.20201130191550.git0bf874a.el8.noarch
python3-ovirt-engine-sdk4-4.4.10-1.20210209.gitf3d6f43.el8.x86_64
python3-ovirt-setup-lib-1.3.3-0.0.master.20200727063144.git90cd6d9.el8.noarch

To make it a bit more curious: this is a bare-metal system
and I cannot reproduce the same errors on a simple KVM
host. One possibly major difference is that the hardware has
multiple interfaces, as opposed to a single interface in KVM.

For any thoughts you care to share I'll be grateful.
thanks, L