On Tue, 22 Jan 2019 11:15:12 +0000
Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
Thanks for your reply,
getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
10.1.31.20
Attached you'll find the logs.
Thanks, to my eyes this looks like a bug.
I tried to isolate the relevant lines in the attached playbook.
Markus, would you be so kind as to check whether ovirt-4.2.8 works for you?
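For readers hitting the same "stdout_lines" failure below: that Ansible message typically appears when a task dereferences `stdout_lines` on a variable registered by a task that failed or was skipped. A minimal sketch of the pattern, with purely hypothetical task and variable names (not the actual create_target_vm.yml contents):

```yaml
# Hypothetical reproduction of the error pattern; names are illustrative.
- name: Query OVN central address
  command: /bin/true          # placeholder for the real query command
  register: ovn_query
  when: false                 # if this task is skipped (or fails) ...

- name: Reconfigure OVN central address
  debug:
    msg: "{{ ovn_query.stdout_lines }}"
  # ... this task fails with: "The task includes an option with an
  # undefined variable. The error was: 'dict object' has no attribute
  # 'stdout_lines'"
```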
________________________________
From: Dominik Holler <dholler(a)redhat.com>
Sent: Monday, 21 January 2019 17:52:35
To: Markus Schaufler
Cc: users(a)ovirt.org; Simone Tiraboschi
Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
Would you please share the related ovirt-host-deploy-ansible-*.log
stored on the host in /var/log/ovirt-hosted-engine-setup ?
Would you please also share the output of
getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
if executed on this host?
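For anyone reproducing this check offline, the pipeline above can be illustrated against a captured sample; the output below is simulated and the address is hypothetical sample data, not taken from this host:

```shell
# Illustrative only: a simulated `getent ahosts` capture for a
# hosted-engine FQDN (hypothetical address and hostname).
sample='10.1.31.20 STREAM ovirt-hci.res01.ads.ooe.local
10.1.31.20 DGRAM
10.1.31.20 RAW'

# Same pipeline as above: take the address column, collapse adjacent duplicates.
addrs=$(printf '%s\n' "$sample" | cut -d' ' -f1 | uniq)

# A single remaining line means the FQDN resolves to one consistent address;
# multiple lines would point at a DNS or /etc/hosts inconsistency.
printf '%s\n' "$addrs"
```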
On Mon, 21 Jan 2019 13:37:53 -0000
"Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> Hi,
>
> I'm trying a (nested) oVirt 4.2.7 HCI rollout on 3 CentOS VMs by
> following
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
> The gluster deployment was successful, but at HE deployment "stage 5" I
> got the following error:
>
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> an option with an undefined variable. The error was: 'dict object'
> has no attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> line 522, column 5, but may\nbe elsewhere in the file depending on
> the exact syntax problem.\n\nThe offending line appears to be:\n\n  #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> - name: Reconfigure OVN central address\n  ^ here\n"}
>
>
> /var/log/messages:
> Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
> Jan 21 14:10:01 HCI01 systemd: Started Session 22 of user root.
> Jan 21 14:10:02 HCI01 systemd: Started Session c306 of user root.
> Jan 21 14:10:03 HCI01 systemd: Started Session c307 of user root.
> Jan 21 14:10:06 HCI01 vdsm[3650]: WARN executor state: count=5 workers=set([<Worker name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker name=periodic/1 running <Task discardable <Operation action=<vdsm.virt.sampling.VMBulkstatsMonitor object at 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
> Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
> Jan 21 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode
> Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
> Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9177] device (vnet0): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
> Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180] device (vnet0): released from master device ovirtmgmt
> Jan 21 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space available
> Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
> Jan 21 14:10:07 HCI01 kvm: 0 guests now active
> Jan 21 14:10:07 HCI01 systemd-machined: Machine qemu-3-HostedEngine terminated.
> Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
> Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
> Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
> Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
> Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.126+0000: 2704: error : virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev: Interface not found
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0' failed: iptables v1.4.21: goto 'HJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FP-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FJ-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X HJ-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed: ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed: iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FO-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FI-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X HI-vnet0' failed: iptables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed: ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT' failed: ip6tables: Bad rule (does a matching rule exist in that chain?).
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that name.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target name 'libvirt-O-vnet0'.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
> Jan 21 14:10:07 HCI01 vdsm[3650]: WARN File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0 already removed
> Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting to remove a non existing network: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9
> Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting to remove a non existing net user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9
> Jan 21 14:10:07 HCI01 vdsm[3650]: WARN File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0 already removed
>
> any ideas on that?
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHF...