Hi Martin,
this is my history (please keep in mind that it might get distorted by the mail client). Note: I didn't stop the ovirt-engine.service, and this caused some errors to be logged - but the engine is still working without issues. As I said, this is my test lab and I was willing to play around :)
Good Luck!
ssh root@engine
# Switch to the postgres user
su - postgres
# If you don't load this, there will be no path for psql, nor will it start at all
source /opt/rh/rh-postgresql95/enable
# Open the DB
psql engine
# Commands in the DB:
select id, storage_name from storage_domain_static;
select storage_domain_id, ovf_disk_id from storage_domains_ovf_info where storage_domain_id='fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_dynamic where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_static where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from base_disks where disk_id = '7a155ede-5317-4860-aa93-de1dc283213e';
delete from base_disks where disk_id = '7dedd0e1-8ce8-444e-8a3d-117c46845bb0';
delete from storage_domains_ovf_info where storage_domain_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_pool_iso_map where storage_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
# I think this shows all tables:
select table_schema, table_name from information_schema.tables order by table_schema, table_name;
# Maybe you don't need this one and you need to find the NFS volume:
select * from gluster_volumes;
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';
select table_schema, table_name from information_schema.tables order by table_schema, table_name;
# The previous delete failed as there was an entry in storage_server_connections.
# In your case it could be different
select * from storage_server_connections;
delete from storage_server_connections where id = '490ee1c7-ae29-45c0-bddd-6170822c8490';
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';
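If you want an extra safety net, each delete can be wrapped in a transaction and only committed once a re-check looks right - a rough sketch with one of the statements above:

BEGIN;
delete from storage_pool_iso_map where storage_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
-- re-check before making it permanent:
select * from storage_pool_iso_map where storage_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
COMMIT;  -- or ROLLBACK; if something looks wrong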
Best Regards,
Strahil Nikolov
On Friday, January 25, 2019 at 11:04:01 GMT+2, Martin Humaj <mhumaj(a)gmail.com> wrote:
Hi Strahil,
I have tried to use the same IP and NFS export to replace the original one; it did not work properly.
If you can guide me on how to do it in the engine DB, I would appreciate it. This is a test system.
Thank you,
Martin
On Fri, Jan 25, 2019 at 9:56 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
Can you create a temporary NFS server that can be accessed during the removal? I have managed to edit the engine's DB to get rid of a cluster domain, but this is not recommended for production systems :)
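A minimal sketch of such a temporary export on a spare EL7 box could look like this (the path and export options are just an example; 36:36 is the usual vdsm:kvm owner):

yum install -y nfs-utils
mkdir -p /exports/tmp-iso
chown 36:36 /exports/tmp-iso
echo '/exports/tmp-iso *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -rav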
Ovirt - 4.2.4.5-1.el7
Is there any way to remove the NFS ISO domain in the DB? We cannot get rid of it in the GUI and we are not able to use it anymore. The problem is that the NFS server which was responsible for the DATA TYPE ISO domain was deleted. Even when we try to change it in the settings, it will not allow us to do it.
Error messages:
Failed to activate Storage Domain oVirt-ISO (Data Center InnovationCenter) by admin@internal-authz
VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: (u'61045461-10ff-4f7a-b464-67198c4a6c27',)
thank you
On Thu, Jan 24, 2019 at 3:20 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> no...
>
> all logs in that folder are attached in the mail before.
>
OK, unfortunately in this case I can only suggest retrying and, when it
reaches
[ INFO ] TASK [Check engine VM health]
connecting to the engine VM via ssh and checking what's happening there with
ovirt-engine.
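For example, something along these lines (a sketch; replace the FQDN with your engine VM's):

ssh root@ovirt-hci.res01.ads.ooe.local
systemctl status ovirt-engine
tail -n 200 /var/log/ovirt-engine/engine.log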
> ------------------------------
> *From:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Sent:* Thursday, January 24, 2019 15:16:52
> *To:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 3:14 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> The hosted engine is not running and cannot be started.
>
>
>
> Do you have on your first host a directory
> like /var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-21T22:47:03Z
> with logs from the engine VM?
>
>
>
> ------------------------------
> *From:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Sent:* Thursday, January 24, 2019 14:45:59
> *To:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
>
>
> It's still the same issue: the host fails to properly check the status of
> the engine over a dedicated health page.
>
> You should connect to ovirt-hci.res01.ads.ooe.local and check the status
> of the ovirt-engine service and /var/log/ovirt-engine/engine.log there.
>
>
>
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *From:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Sent:* Thursday, January 24, 2019 11:56:50
> *To:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind to check if ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular it's here:
> for some reason we got \"engine-status\": {\"reason\": \"failed
> liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\":
> \"Up\"}
> over 120 attempts: we have to check engine.log (it got collected as well
> from the engine VM) to understand why the engine was failing to start.
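> A quick way to pull just that engine-status block out of the JSON, assuming
> jq is installed on the host, is something like:
>
> hosted-engine --vm-status --json | jq '.["1"]["engine-status"]'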
>
>
>
>
> > ________________________________
> > From: Dominik Holler <dholler(a)redhat.com>
> > Sent: Monday, January 21, 2019 17:52:35
> > To: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyper…
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> disca rded task#=413 at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>, <Worker
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>name=periodic/3
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>waiting task#=414 at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>,
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10><Worker
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>name=periodic/5
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>waiting
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>task#=0
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>,
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650><Worker
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>name
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>=periodic/2
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>waiting
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>task#=412
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>])
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>Jan
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>21
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>14:10:06
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>HCI01
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>kernel:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>ovirtmgmt:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>port
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>2(vnet0)
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>entered
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>disabled
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>state
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>Jan
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>21
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>14:10:06
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>HCI01
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>kernel:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>device
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>vnet0
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>left
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>promiscuous
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>mode
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>Jan
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>21
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>14:10:06
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>HCI01
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>kernel:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>ovirtmgmt:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>port
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>2(vnet0)
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>entered
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>disabled
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'remo ved') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet 0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v 1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1. 4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptable s v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vne t0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v 1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tab les v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal targe t name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal targ et name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exis t. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/ List
> > > Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHFWW…
> >
>
>
On Thu, Jan 24, 2019 at 3:14 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> The hosted engine is not running and cannot be started.
>
>
>
Do you have on your first host a directory
like /var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-21T22:47:03Z
with logs from the engine VM?
>
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 14:45:59
> *An:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
>
>
> It's still the same issue: the host fail to properly check the status of
> the engine over a dedicate health page.
>
> You should connect to ovirt-hci.res01.ads.ooe.local and check the status
> of ovirt-engine service and /var/log/ovirt-engine/engine.log there.
>
>
>
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 11:56:50
> *An:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind to check if ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular it's here:
> for some reason we got \"engine-status\": {\"reason\": \"failed
> liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\":
> \"Up\"}
> over 120 attempts: we have to check engine.log (it got collected as well
> from the engine VM) to understand why the engine was failing to start.
>
>
>
>
> > ________________________________
> > Von: Dominik Holler <dholler(a)redhat.com>
> > Gesendet: Montag, 21. Jänner 2019 17:52:35
> > An: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Betreff: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSED_ENGNE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyper…
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> disca rded task#=413 at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>, <Worker
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>name=periodic/3
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>waiting task#=414 at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>,
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10><Worker
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>name=periodic/5
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>waiting
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>task#=0
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>,
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650><Worker
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>name
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>=periodic/2
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>waiting
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>task#=412
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>])
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>Jan
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>21
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>14:10:06
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>HCI01
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>kernel:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>ovirtmgmt:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>port
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>2(vnet0)
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>entered
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>disabled
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>state
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>Jan
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>21
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>14:10:06
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>HCI01
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>kernel:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>device
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>vnet0
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>left
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>promiscuous
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>mode
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>Jan
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>21
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>14:10:06
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>HCI01
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>kernel:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>ovirtmgmt:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>port
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>2(vnet0)
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>entered
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>disabled
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'remo ved') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet 0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v 1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1. 4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptable s v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vne t0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v 1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tab les v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/ List
> > > Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHFWW…
> >
>
>
On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
It's still the same issue: the host fails to properly check the status of
the engine over its dedicated health page.
You should connect to ovirt-hci.res01.ads.ooe.local and check the status of
the ovirt-engine service and /var/log/ovirt-engine/engine.log there.
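For reference, a minimal way to run those checks by hand (the hostname is the one from this thread; the health URL below is the page the HA agent's liveliness check polls, so adjust it if your setup differs):

ssh root@ovirt-hci.res01.ads.ooe.local
# is the engine service actually up?
systemctl status ovirt-engine
# last startup attempts and errors
tail -n 200 /var/log/ovirt-engine/engine.log
# the liveliness check queries the engine health servlet; a healthy engine answers with a short status string
curl http://localhost/ovirt-engine/services/health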
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *From:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Sent:* Thursday, 24 January 2019 11:56:50
> *To:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind as to check whether ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular it's here:
> for some reason we got \"engine-status\": {\"reason\": \"failed
> liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\":
> \"Up\"}
> over 120 attempts: we have to check engine.log (it got collected as well
> from the engine VM) to understand why the engine was failing to start.
>
>
>
>
> > ________________________________
> > From: Dominik Holler <dholler(a)redhat.com>
> > Sent: Monday, 21 January 2019 17:52:35
> > To: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyper…
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> disca rded task#=413 at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>, <Worker
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>name=periodic/3
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>waiting task#=414 at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>,
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10><Worker
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>name=periodic/5
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>waiting
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>task#=0
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0> 0x7fd2d5ed0510>0x7fd2d5ed0b10>at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>,
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650><Worker
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>name
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>=periodic/2
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>waiting
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>task#=412
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>at
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>])
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>Jan
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>21
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>14:10:06
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>HCI01
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>kernel:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>ovirtmgmt:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>port
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>2(vnet0)
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>entered
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>disabled
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>state
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>Jan
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>21
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>14:10:06
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>HCI01
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>kernel:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>device
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>vnet0
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>left
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>promiscuous
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>mode
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>Jan
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>21
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>14:10:06
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>HCI01
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>kernel:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>ovirtmgmt:
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>port
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>2(vnet0)
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>entered
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>disabled
> > > 0x7fd2d4679490> 0x7fd33c1e0ed0>
> 0x7fd2d5ed0510>0x7fd2d5ed0b10>0x7fd2d425f650>0x7fd2d5ed07d0>state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptables v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/ List
> > > Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHFWW…
> >
>
>
Hi all,
I have an oVirt 4.2.8 cluster. The nodes are 4.2 oVirt nodes, and the volumes (5) are attached to the nodes by FC.
Two weeks ago I made a small VM (CentOS 7 based) for myself to test with (named A). After the test I dropped the VM.
The next day I made another VM (named B) for the developers and tried to add a new disk to that VM (B). Then the original volume group (VG) of VM (B) went missing and I got back the VG of the VM (A) I had dropped the day before!
I tried to restart the VM, but it never started again.
I dropped this VM (B) too, and tried to add a new disk to an older running VM (C), but its volume group changed to the VG of the VM (B) I had dropped before.
I checked this "error" and it happens when I delete or move disks at the end of the FC volume.
Has somebody ever seen an error like this?
Thanks,
csaba
PS: this cluster manages 120 production VMs, so…
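In case it helps whoever looks at this: a minimal way to see what LVM itself currently reports for the storage-domain VG on one of the hosts (the UUID below is just a placeholder for your FC storage domain ID):

# which FC PVs exist and which VG each one claims to belong to
pvs -o pv_name,pv_size,vg_name,vg_uuid
# the storage-domain VGs and their LVs (one LV per disk image)
vgs
lvs <storage-domain-uuid>
# oVirt keeps image/volume IDs in LV tags, so this can be compared with what the engine shows for the disks
lvs -o lv_name,lv_tags <storage-domain-uuid>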
Hi all,
If anyone uses the latest pycurl 7.43 provided by pip or Ansible Tower/AWX,
any call into ovirtsdk4 will fail with the following log:
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
File "/tmp/ansible_ovirt_auth_payload_L1HK9E/__main__.py", line 202,
in <module>
import ovirtsdk4 as sdk
File
"/opt/awx/embedded/lib64/python2.7/site-packages/ovirtsdk4/__init__.py",
line 22, in <module>
import pycurl
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"ca_file": null,
"compress": true,
"headers": null,
"hostname": null,
"insecure": true,
"kerberos": false,
"ovirt_auth": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"state": "present",
"timeout": 0,
"token": null,
"url": "https://acore.v100.abes.fr/ovirt-engine/api",
"username": "admin@internal"
}
},
"msg": "ovirtsdk4 version 4.2.4 or higher is required for this module"
}
The only workaround is to pin the pycurl version with
pip install -U "pycurl == 7.19.0"
(before doing this in Tower/AWX, you should create a venv)
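A rough sketch of that workaround; the venv path and the SDK version pin below are only examples, adjust to your environment:

# create and activate a dedicated venv for the job
virtualenv /opt/awx-ovirt-venv
source /opt/awx-ovirt-venv/bin/activate
# pin pycurl first, then install the SDK
pip install -U "pycurl == 7.19.0"
pip install "ovirt-engine-sdk-python >= 4.2.4"
# quick sanity check that the import chain works again
python -c "import pycurl, ovirtsdk4; print(pycurl.version)"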
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
Hello!
I'm running oVirt 4.2.7.5-1.el7 on a 3-host cluster.
Cluster CPU Type is "AMD Opteron G3".
On default cluster I can see the warning:
"Warning: The CPU type 'AMD Opteron G3' will not be supported in the
next minor version update'"
Is it still supported in version 4.2.8? I can't find any reference in the
documentation or the changelog.
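One way to check this against the engine itself rather than the docs is to ask the REST API which CPU types a given cluster compatibility level offers (the engine FQDN and credentials below are placeholders, and this assumes the clusterlevels endpoint of the v4 API is available in your version):

curl -s -k -u admin@internal:PASSWORD \
  "https://engine.example.com/ovirt-engine/api/clusterlevels/4.2" | grep -i opteron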
Hi,
I'm trying a (nested) oVirt 4.2.7 HCI rollout on 3 CentOS VMs by following https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyper…
The gluster deployment was successful, but at HE deployment "stage 5" I got the following error:
[ INFO ] TASK [Reconfigure OVN central address]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout_lines'\n\nThe error appears to have been in '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line 522, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n # https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles… - name: Reconfigure OVN central address\n ^ here\n"}
/var/log/messages:
Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 21 14:10:01 HCI01 systemd: Started Session 22 of user root.
Jan 21 14:10:02 HCI01 systemd: Started Session c306 of user root.
Jan 21 14:10:03 HCI01 systemd: Started Session c307 of user root.
Jan 21 14:10:06 HCI01 vdsm[3650]: WARN executor state: count=5 workers=set([<Worker name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker name=periodic/1 running <Task discardable <Operation action=<vdsm.virt.sampling.VMBulkstatsMonitor object at 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
Jan 21 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode
Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>  [1548076206.9177] device (vnet0): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180] device (vnet0): released from master device ovirtmgmt
Jan 21 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space available
Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Jan 21 14:10:07 HCI01 kvm: 0 guests now active
Jan 21 14:10:07 HCI01 systemd-machined: Machine qemu-3-HostedEngine terminated.
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.126+0000: 2704: error : virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev: Interface not found
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0' failed: iptables v1.4.21: goto 'HJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FP-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X HJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables
v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed: ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed: iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FO-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X HI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed: ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT' failed: ip6tables: Bad rule (does a matching rule exist in that chain?).
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target name 'libvirt-O-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0 already removed
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting to remove a non existing network: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting to remove a non existing net user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0 already removed
any ideas on that?