Re: ovirt 4.2 HCI rollout
by Simone Tiraboschi
On Thu, Jan 24, 2019 at 3:20 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> no...
>
> all logs in that folder are attached in the mail before.
>
OK, unfortunately in this case I can only suggest to retry and, when it
reaches
[ INFO ] TASK [Check engine VM health]
try to connect to the engine VM via ssh and check what's happening to
ovirt-engine there
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 15:16:52
> *An:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 3:14 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> The hosted engine is not running and cannot be started.
>
>
>
> Do you have on your first host a directory
> like /var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-21T22:47:03Z
> with logs from the engine VM?
>
>
>
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 14:45:59
> *An:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
>
>
> It's still the same issue: the host fails to properly check the status of
> the engine over a dedicated health page.
>
> You should connect to ovirt-hci.res01.ads.ooe.local and check the status
> of ovirt-engine service and /var/log/ovirt-engine/engine.log there.
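A side note on reading that failure dump: Ansible captures the raw
`hosted-engine --vm-status --json` output as a string in its "stdout" field,
so the health fields sit behind two JSON decodes. A minimal Python sketch of
pulling them out, using a sample trimmed from the log above (purely
illustrative, not part of any oVirt tooling):

```python
import json

# Ansible's failure dict carries the raw `hosted-engine --vm-status --json`
# output as a string in its "stdout" field, so it must be decoded a second
# time. Sample values trimmed from the log above.
ansible_result = {
    "stdout": json.dumps({
        "1": {
            "hostname": "HCI01.res01.ads.ooe.local",
            "engine-status": {"reason": "failed liveliness check",
                              "health": "bad", "vm": "up", "detail": "Up"},
            "score": 3400,
        },
        "global_maintenance": False,
    })
}

def engine_health(result):
    """Return (hostname, health, reason) per host from a vm-status result."""
    status = json.loads(result["stdout"])
    return [
        (h["hostname"], h["engine-status"]["health"], h["engine-status"]["reason"])
        for key, h in status.items()
        if key != "global_maintenance"  # top-level flag, not a host entry
    ]

for hostname, health, reason in engine_health(ansible_result):
    print(f"{hostname}: health={health} ({reason})")
```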
>
>
>
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 11:56:50
> *An:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind to check if ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular it's here:
> for some reason we got \"engine-status\": {\"reason\": \"failed
> liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\":
> \"Up\"}
> over 120 attempts: we have to check engine.log (it got collected as well
> from the engine VM) to understand why the engine was failing to start.
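For context on what a "failed liveliness check" means here: the host
periodically probes a health page served by the engine VM. A rough sketch of
such a probe, assuming the conventional /ovirt-engine/services/health
endpoint (the path and semantics are my assumption, not something confirmed
in this thread):

```python
import urllib.request

def engine_is_live(base_url, timeout=5):
    """Probe the engine health page the way a liveliness check might.

    Assumption: the page lives at <base>/ovirt-engine/services/health and
    answers HTTP 200 when the engine and its DB are up; the path and
    semantics are an illustration, not taken from this thread.
    """
    url = base_url.rstrip("/") + "/ovirt-engine/services/health"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, timeouts, and HTTP errors alike.
        return False
```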
>
>
>
>
> > ________________________________
> > Von: Dominik Holler <dholler(a)redhat.com>
> > Gesendet: Montag, 21. Jänner 2019 17:52:35
> > An: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Betreff: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
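A rough Python equivalent of that getent pipeline, for anyone scripting the
same resolution check (illustrative only):

```python
import socket

def resolved_addresses(fqdn):
    """Collect the unique addresses a name resolves to, roughly what
    `getent ahosts FQDN | cut -d' ' -f1 | uniq` prints."""
    infos = socket.getaddrinfo(fqdn, None)
    # De-duplicate while preserving order, like `uniq` on getent output.
    seen, addrs = set(), []
    for *_, sockaddr in infos:
        addr = sockaddr[0]
        if addr not in seen:
            seen.add(addr)
            addrs.append(addr)
    return addrs

print(resolved_addresses("localhost"))
```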
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker
> > > name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker
> > > name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker
> > > name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state
> > > Jan 21 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1. 4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptables v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/ List
> > > Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHF...
> >
>
>
5 years, 11 months
Re: ovirt 4.2 HCI rollout
by Simone Tiraboschi
On Thu, Jan 24, 2019 at 3:14 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> The hosted engine is not running and cannot be started.
>
>
>
Do you have on your first host a directory
like /var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-21T22:47:03Z
with logs from the engine VM?
>
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 14:45:59
> *An:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
>
>
> It's still the same issue: the host fails to properly check the status of
> the engine over a dedicated health page.
>
> You should connect to ovirt-hci.res01.ads.ooe.local and check the status
> of ovirt-engine service and /var/log/ovirt-engine/engine.log there.
>
>
>
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 11:56:50
> *An:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind as to check if ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular it's here:
> for some reason we got \"engine-status\": {\"reason\": \"failed
> liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\":
> \"Up\"}
> over all 120 attempts: we have to check engine.log (it was also collected
> from the engine VM) to understand why the engine was failing to start.
>
>
>
>
> > ________________________________
> > Von: Dominik Holler <dholler(a)redhat.com>
> > Gesendet: Montag, 21. Jänner 2019 17:52:35
> > An: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Betreff: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker
> > > name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker
> > > name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker
> > > name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state
> > > Jan 21 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptables v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> >
>
>
Re: ovirt 4.2 HCI rollout
by Simone Tiraboschi
On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
It's still the same issue: the host fails to properly check the status of
the engine over a dedicated health page.
You should connect to ovirt-hci.res01.ads.ooe.local and check the status of
ovirt-engine service and /var/log/ovirt-engine/engine.log there.
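The deployment loop behind the failure above simply polls `hosted-engine
--vm-status --json` until some host reports the engine health as "good":
"vm": "up" alone is not enough, the engine's health page must also answer
(the "liveliness check"). A minimal sketch of that condition, run against a
trimmed sample shaped like the payload quoted in this thread; the helper
name is ours, and the real check lives in the ovirt-hosted-engine-setup
Ansible playbooks:

```python
import json

# Trimmed sample shaped like the "hosted-engine --vm-status --json"
# output quoted in this thread: the VM is up, but the engine's health
# page never answered, so health stays "bad".
sample = json.dumps({
    "1": {
        "hostname": "HCI01.res01.ads.ooe.local",
        "host-id": 1,
        "score": 3400,
        "engine-status": {
            "reason": "failed liveliness check",
            "health": "bad",
            "vm": "up",
            "detail": "Up",
        },
    },
    "global_maintenance": False,
})

def engine_healthy(vm_status_json):
    """True only if at least one host reports engine health as "good"."""
    status = json.loads(vm_status_json)
    hosts = (v for k, v in status.items() if k != "global_maintenance")
    return any(h["engine-status"]["health"] == "good" for h in hosts)

print(engine_healthy(sample))  # -> False: VM up, but liveliness check failing
```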
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 11:56:50
> *An:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind as to check whether ovirt-4.2.8 works for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular it's here:
> for some reason we got \"engine-status\": {\"reason\": \"failed
> liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\":
> \"Up\"}
> over 120 attempts: we have to check engine.log (it got collected as well
> from the engine VM) to understand why the engine was failing to start.
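The 120-attempt health check the playbook runs boils down to a simple poll loop. A sketch with the status call stubbed out (the attempt count matches the failure above; the delay value is an assumption, not taken from the playbook):

```python
import time

def wait_for_engine(get_health, attempts=120, delay=10):
    """Poll until the engine reports good health or attempts run out."""
    for attempt in range(1, attempts + 1):
        if get_health() == "good":
            return attempt  # how many polls it took
        time.sleep(delay)
    raise RuntimeError("engine still unhealthy after %d attempts" % attempts)

# Example with a stubbed status source: healthy on the third poll.
healths = iter(["bad", "bad", "good"])
print(wait_for_engine(lambda: next(healths), attempts=5, delay=0))  # → 3
```

In the failure quoted above, every one of the 120 polls returned "bad", which is why the next step is the engine VM's own engine.log.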
>
>
>
>
> > ________________________________
> > Von: Dominik Holler <dholler(a)redhat.com>
> > Gesendet: Montag, 21. Jänner 2019 17:52:35
> > An: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Betreff: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker
> > > name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker
> > > name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker
> > > name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state
> > > Jan 21 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptables v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/ List
> > > Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHF...
> >
>
>
5 years, 11 months
lvm problem
by Nyika Csaba
Hi all,
I have an oVirt 4.2.8 cluster. The nodes are 4.2 oVirt nodes, and the volumes (5) are attached to the nodes by FC.
2 weeks ago I made a small VM (CentOS 7 based, named A) for myself to test. After the test I dropped the VM.
The next day I made another VM (named B) for the developers and tried to add a new disk to it (B). Then the original volume group (VG) of VM B went missing, and I got back the VG of the VM (A) I had dropped the day before!
I tried to restart the VM, but it never started again.
I dropped this VM (B) too, and tried to add a new disk to an older running VM (C), but its volume group also changed to the VG of the VM I had dropped before (B).
I investigated this „error” and found that it happens whenever I delete or move disks at the end of the FC volume.
Has anybody ever seen an error like this?
Thanks,
csaba
PS: I manage 120 production VMs in this cluster, so….
5 years, 11 months
latest pycurl 7.43 breaks ovirtsdk4
by Nathanaël Blanchet
Hi all,
If anyone uses the latest pycurl 7.43 provided by pip or Ansible Tower/AWX,
any call into ovirtsdk4 fails with the following log:
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
File "/tmp/ansible_ovirt_auth_payload_L1HK9E/__main__.py", line 202,
in <module>
import ovirtsdk4 as sdk
File
"/opt/awx/embedded/lib64/python2.7/site-packages/ovirtsdk4/__init__.py",
line 22, in <module>
import pycurl
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"ca_file": null,
"compress": true,
"headers": null,
"hostname": null,
"insecure": true,
"kerberos": false,
"ovirt_auth": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"state": "present",
"timeout": 0,
"token": null,
"url": "https://acore.v100.abes.fr/ovirt-engine/api",
"username": "admin@internal"
}
},
"msg": "ovirtsdk4 version 4.2.4 or higher is required for this module"
}
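The "version 4.2.4 or higher" message is misleading: Ansible modules of this kind typically guard the whole SDK import in one try/except, so an ImportError raised by a broken pycurl surfaces as a generic SDK version complaint. A hedged sketch of that pattern (illustrative only, not the actual module code; the pycurl error text is invented):

```python
def check_sdk(import_fn):
    """Mimic the module's import guard: any ImportError -- including one
    raised while ovirtsdk4 itself does `import pycurl` -- yields the
    same generic failure message."""
    try:
        import_fn()
    except ImportError:
        return "ovirtsdk4 version 4.2.4 or higher is required for this module"
    return None

def broken_pycurl():
    # Stand-in for what a bad pycurl build raises on import.
    raise ImportError("pycurl: libcurl link-time ssl backend mismatch")

print(check_sdk(broken_pycurl))
# → ovirtsdk4 version 4.2.4 or higher is required for this module
```

So the fix is on the pycurl side, even though the error names ovirtsdk4.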
The only workaround is to pin the pycurl version with
pip install -U "pycurl == 7.19.0"
(In Tower/AWX, create a virtualenv first.)
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
5 years, 11 months
oVirt 4.2.8 CPU Compatibility
by Stefano Danzi
Hello!
I'm running oVirt 4.2.7.5-1.el7 on 3 hosts cluster.
Cluster CPU Type is "AMD Opteron G3".
On the default cluster I can see the warning:
"Warning: The CPU type 'AMD Opteron G3' will not be supported in the
next minor version update'"
Is this CPU type still supported in version 4.2.8? I can't find any
reference in the documentation or changelog.
5 years, 11 months
ovirt 4.2 HCI rollout
by Markus Schaufler
Hi,
I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by following https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
gluster deployment was successful but at HE deployment "stage 5" I got following error:
[ INFO ] TASK [Reconfigure OVN central address]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout_lines'\n\nThe error appears to have been in '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line 522, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n # https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol... - name: Reconfigure OVN central address\n ^ here\n"}
/var/log/messages:
Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 21 14:10:01 HCI01 systemd: Started Session 22 of user root.
Jan 21 14:10:02 HCI01 systemd: Started Session c306 of user root.
Jan 21 14:10:03 HCI01 systemd: Started Session c307 of user root.
Jan 21 14:10:06 HCI01 vdsm[3650]: WARN executor state: count=5 workers=set([<Worker name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker name=periodic/1 running
<Task discardable <Operation action=<vdsm.virt.sampling.VMBulkstatsMonitor object at 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at 0x7fd33c1e0ed0> disca
rded task#=413 at 0x7fd2d5ed0510>, <Worker name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker name
=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
Jan 21 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode
Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9177] device (vnet0): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'remo
ved')
Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180] device (vnet0): released from master device ovirtmgmt
Jan 21 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space available
Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Jan 21 14:10:07 HCI01 kvm: 0 guests now active
Jan 21 14:10:07 HCI01 systemd-machined: Machine qemu-3-HostedEngine terminated.
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.126+0000: 2704: error : virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev: Interface not found
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0' failed: iptables v1.4.21: goto 'HJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FP-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X HJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed: ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed: iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FO-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X HI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed: ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT' failed: ip6tables: Bad rule (does a matching rule exist in that chain?).
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target name 'libvirt-O-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0 already removed
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting to remove a non existing network: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting to remove a non existing net user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0 already removed
Any ideas on that?
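(Side note, in case it helps anyone reading the `hosted-engine --vm-status --json` output quoted earlier in this thread: the per-host `extra` blob is just newline-separated key=value metadata. A rough sketch for pulling out the score and host id — the sample payload below is made up, patterned on the output above:)

```python
import json

def parse_extra(extra):
    """Split the newline-separated key=value metadata block into a dict."""
    out = {}
    for line in extra.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            out[key] = value
    return out

# Made-up sample shaped like `hosted-engine --vm-status --json`:
# host ids map to dicts carrying an "extra" metadata string.
sample = json.dumps({
    "1": {
        "conf_on_shared_storage": True,
        "live-data": True,
        "extra": "metadata_parse_version=1\n"
                 "metadata_feature_version=1\n"
                 "timestamp=3049 (Thu Jan 24 14:11:59 2019)\n"
                 "host-id=1\n"
                 "score=3400",
    }
})

status = json.loads(sample)
for host_id, host in status.items():
    meta = parse_extra(host["extra"])
    print(host_id, "score:", meta.get("score"), "host-id:", meta.get("host-id"))
```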
Q: Is it safe to execute on node "saslpasswd2 -a libvirt username" ?
by Andrei Verovski
Hi !
Is it safe to execute this command on an oVirt node?
saslpasswd2 -a libvirt username
It's a production environment; screwing up anything is not an option.
I have no idea how VDSM interacts with libvirt, so not sure about this.
Thanks in advance
Andrei
Host non-responsive after yum update CentOS7/Ovirt3.6
by jaherring@usa.net
Hi, I'm working on a CentOS7-based oVirt 3.6 system (ovirt-engine/db on one machine, two separate oVirt VM hosts) which has been running fine but mostly ignored for 2-3 years. Recently it was decided to update the OS, as it was far behind on security updates, so one host was put into maintenance mode, yum update'd, and rebooted; when we then tried to take it out of maintenance mode it came back "non-responsive".
If I look in /var/log/ovirt-engine/engine.log on the engine machine I see for this host (vmserver2):
"ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler_Worker-36) [2bc0978d] Command 'GetCapabilitiesVDSCommand(HostName = vmserver2, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', hostId='6725086f-42c0-40eb-91f1-0f2411ea9432', vds='Host[vmserver2,6725086f-42c0-40eb-91f1-0f2411ea9432]'})' execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed" and thereafter more errors. This keeps repeating in the log.
In the Ovirt GUI I see multiple occurrences of log entries for the problem host:
"vmserver2...command failed: Vds timeout occurred"
"vmserver2...command failed: Heartbeat exceeded"
"vmserver2...command failed: internal error: Unknown CPU model Broadwell-noTSX-IBRS"
Firewall rules look identical to the host which is working normally but has not been updated.
Any thoughts about how to fix or further troubleshoot this?
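(Not a fix, but when an engine.log is that noisy it can help to tally which of the failure patterns above dominates before chasing any one of them. A rough sketch — the sample lines below are made up, patterned on the messages quoted above:)

```python
import re
from collections import Counter

# Failure signatures seen in this thread's engine.log / GUI events.
PATTERNS = {
    "vds_timeout": re.compile(r"Vds timeout occurred"),
    "heartbeat":   re.compile(r"Heartbeat exceeded"),
    "unknown_cpu": re.compile(r"Unknown CPU model (\S+)"),
    "connection":  re.compile(r"ClientConnectionException"),
}

def tally(lines):
    """Count how many log lines match each known failure signature."""
    counts = Counter()
    for line in lines:
        for name, pat in PATTERNS.items():
            if pat.search(line):
                counts[name] += 1
    return counts

# Made-up sample lines shaped like the errors quoted above:
sample_log = [
    "vmserver2 command failed: Vds timeout occurred",
    "vmserver2 command failed: Heartbeat exceeded",
    "vmserver2 command failed: internal error: Unknown CPU model Broadwell-noTSX-IBRS",
    "execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed",
]
print(tally(sample_log))
```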