The hosted engine is not running and cannot be started.
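A minimal set of commands to confirm that state on the host and to retry a manual start (standard hosted-engine CLI; the agent/broker log paths below are the usual defaults):

# show what the HA agent thinks of the engine VM
hosted-engine --vm-status
# ask the agent to start the engine VM by hand, then follow its logs
hosted-engine --vm-start
tail -f /var/log/ovirt-hosted-engine-ha/agent.log /var/log/ovirt-hosted-engine-ha/broker.log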
________________________________
From: Simone Tiraboschi <stirabos(a)redhat.com>
Sent: Thursday, 24 January 2019 14:45:59
To: Markus Schaufler
Cc: Dominik Holler; users(a)ovirt.org
Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler
<markus.schaufler@digit-all.at> wrote:
Hi,
thanks for the replies.
I updated to 4.2.8 and tried again:
[ INFO ] TASK [Check engine VM health]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true,
"cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.165316",
"end": "2019-01-24 14:12:06.899564", "rc": 0, "start": "2019-01-24 14:12:06.734248",
"stderr": "", "stderr_lines": [], "stdout": "...", "stdout_lines": ["..."]}

where "stdout" and "stdout_lines" both carry the same --vm-status JSON:

{"1": {"conf_on_shared_storage": true, "live-data": true,
  "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3049 (Thu Jan 24 14:11:59 2019)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=3049 (Thu Jan 24 14:11:59 2019)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineStarting\nstopped=False\n",
  "hostname": "HCI01.res01.ads.ooe.local", "host-id": 1,
  "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"},
  "score": 3400, "stopped": false, "maintenance": false, "crc32": "0c1a3ddb",
  "local_conf_timestamp": 3049, "host-ts": 3049},
 "global_maintenance": false}
It's still the same issue: the host fails to properly check the status of the engine
over the dedicated health page.
You should connect to ovirt-hci.res01.ads.ooe.local and check the status of the ovirt-engine
service and /var/log/ovirt-engine/engine.log there.
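A minimal sketch of those checks, assuming SSH access to the engine VM and assuming the liveliness probe targets the usual /ovirt-engine/services/health page (not confirmed in this thread):

# from the host: probe the health page the liveliness check is believed to use
curl -k https://ovirt-hci.res01.ads.ooe.local/ovirt-engine/services/health

# on the engine VM itself:
ssh root@ovirt-hci.res01.ads.ooe.local
systemctl status ovirt-engine
tail -n 200 /var/log/ovirt-engine/engine.log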
[ INFO ] TASK [Check VM status at virt level]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Fail if engine VM is not running]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Get target engine VM IPv4 address]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Get VDSM's target engine VM stats]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Convert stats to JSON format]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Fail if the Engine has no IP address]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved IP]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Get target engine VM IPv4 address]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Reconfigure OVN central address]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
option with an undefined variable. The error was: 'dict object' has no attribute
'stdout_lines'\n\nThe error appears to have been in
'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line 518,
column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe
offending line appears to be:\n\n #
https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
- name: Reconfigure OVN central address\n ^ here\n"}
attached you'll find the setup logs.
best regards,
Markus Schaufler
________________________________
From: Simone Tiraboschi <stirabos@redhat.com>
Sent: Thursday, 24 January 2019 11:56:50
To: Dominik Holler
Cc: Markus Schaufler; users@ovirt.org
Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler
<dholler@redhat.com> wrote:
On Tue, 22 Jan 2019 11:15:12 +0000
Markus Schaufler
<markus.schaufler@digit-all.at> wrote:
Thanks for your reply,
getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
10.1.31.20
attached you'll find the logs.
Thanks, to my eyes this looks like a bug.
I tried to isolate the relevant lines in the attached playbook.
Markus, would you be so kind as to check if ovirt-4.2.8 is working for you?
OK, understood: the real error was just a few lines before what Dominik pointed out:
"stdout": "{\"1\": {\"conf_on_shared_storage\":
true, \"live-data\": true, \"extra\":
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792 (Mon Jan 21
13:57:45 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21 13:57:45
2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
\"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
\"engine-status\": {\"reason\": \"failed liveliness check\",
\"health\": \"bad\", \"vm\": \"up\",
\"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
false, \"maintenance\": false, \"crc32\": \"ba303717\",
\"local_conf_timestamp\": 5792, \"host-ts\": 5792},
\"global_maintenance\": false}",
"stdout_lines": [
"{\"1\": {\"conf_on_shared_storage\": true,
\"live-data\": true, \"extra\":
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792 (Mon Jan 21
13:57:45 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21 13:57:45
2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
\"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
\"engine-status\": {\"reason\": \"failed liveliness check\",
\"health\": \"bad\", \"vm\": \"up\",
\"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
false, \"maintenance\": false, \"crc32\": \"ba303717\",
\"local_conf_timestamp\": 5792, \"host-ts\": 5792},
\"global_maintenance\": false}"
]
}"
2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
'ansible_type': 'task', 'ansible_task': u'Check engine VM
health', 'ansible_result': u'type: <type \'dict\'>\nstr:
{\'_ansible_parsed\': True, \'stderr_lines\': [], u\'changed\':
True, u\'end\': u\'2019-01-21 13:57:46.242423\',
\'_ansible_no_log\': False, u\'stdout\': u\'{"1":
{"conf_on_shared_storage": true, "live-data": true, "extra":
"metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792 (Mon Jan
21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
and in particular it's here:
for some reason we got "engine-status": {"reason": "failed liveliness check",
"health": "bad", "vm": "up", "detail": "Up"}
over all 120 attempts; we have to check engine.log (it was collected from the engine
VM as well) to understand why the engine was failing to start.
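For readability while the deployment keeps retrying, the same status can be watched on the host in pretty-printed form (a minimal sketch; it only assumes the stock watch utility and Python's json.tool module):

watch -n 10 'hosted-engine --vm-status --json | python -m json.tool'

As long as "engine-status" keeps showing "failed liveliness check" with "vm": "up", the VM itself is running but the engine service inside it is not answering, which is why engine.log on the engine VM is the place to look.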
________________________________
From: Dominik Holler <dholler@redhat.com>
Sent: Monday, 21 January 2019 17:52:35
To: Markus Schaufler
Cc: users@ovirt.org; Simone Tiraboschi
Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
Would you please share the related ovirt-host-deploy-ansible-*.log
stored on the host in /var/log/ovirt-hosted-engine-setup?
Would you please also share the output of
getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
if executed on this host?
On Mon, 21 Jan 2019 13:37:53 -0000
"Markus Schaufler"
<markus.schaufler@digit-all.at> wrote:
> Hi,
>
> I'm trying a (nested) oVirt 4.2.7 HCI rollout on 3 CentOS VMs by
> following
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
> The gluster deployment was successful, but at HE deployment "stage 5" I
> got the following error:
>
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> an option with an undefined variable. The error was: 'dict object'
> has no attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> line 522, column 5, but may\nbe elsewhere in the file depending on
> the exact syntax problem.\n\nThe offending line appears to be:\n\n
> #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
> /var/log/messages:
> Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> Engine VM stopped on localhost
> Jan 21 14:10:01 HCI01 systemd: Started Session 22 of user root.
> Jan 21 14:10:02 HCI01 systemd: Started Session c306 of user root.
> Jan 21 14:10:03 HCI01 systemd: Started Session c307 of user root.
> Jan 21 14:10:06 HCI01 vdsm[3650]: WARN executor state: count=5
> workers=set([<Worker name=periodic/4 waiting task#=141 at 0x7fd2d4316910>,
> <Worker name=periodic/1 running <Task discardable <Operation
> action=<vdsm.virt.sampling.VMBulkstatsMonitor object at 0x7fd2d4679490>
> at 0x7fd2d4679710> timeout=7.5, duration=7 at 0x7fd33c1e0ed0> discarded
> task#=413 at 0x7fd2d5ed0510>, <Worker name=periodic/3 waiting task#=414
> at 0x7fd2d5ed0b10>, <Worker name=periodic/5 waiting task#=0 at
> 0x7fd2d425f650>, <Worker name=periodic/2 waiting task#=412 at
> 0x7fd2d5ed07d0>])
> Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
> Jan 21 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode
> Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
> Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> [1548076206.9177] device (vnet0): state change: disconnected ->
> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Jan 21
> 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> device (vnet0): released from master device ovirtmgmt Jan 21
> 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> warning : qemuGetProcessInfo:1406 : cannot parse process status
> data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> 2019-01-21 13:10:07.126+0000: 2704: error :
> virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> `iptables -h' or 'iptables --help' for more information. Jan 21
> 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is
> not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.4.21: goto
> 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> --help' for more information. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> failed: iptables v1.4.21: goto 'HJ-vnet0' is not a
> chain#012#012Try `iptables -h' or 'iptables --help' for more
> information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto
> 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> --help' for more information. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> chain/target/match by that name. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> chain/target/match by that name. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> Chain 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> a chain#012#012Try `iptables -h' or 'iptables --help' for more
> information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> `iptables -h' or 'iptables --help' for more information. Jan 21
> 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> --help' for more information. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> failed: iptables: Bad rule (does a matching rule exist in that
> chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> --help' for more information. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> failed: ip6tables: Bad rule (does a matching rule exist in that
> chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> chain/target/match by that name. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> chain/target/match by that name. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> WARN
> File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> to remove a non existing network:
> ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> 14:10:07 HCI01 vdsm[3650]: WARN
> File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> already removed
>
> any ideas on that?