I'm not sure whether I should submit a bug report about this, so I'm asking around here
first...
I've found a bug report that seems related, but I don't think it is the same issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1523661
This is the problem:
- I'm trying to do a clean HE deploy with oVirt 4.2.1 on a clean CentOS 7.4 host
- I have a LACP bond (bond0) and I need my management network to be on VLAN 1005, so I
have created the interface bond0.1005 on the host, and everything works at that level
- I run hosted-engine --deploy, and it always fails with:
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true,
"cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ //g | awk '{ print $9 }'",
"delta": "0:00:00.006473", "end": "2018-02-15 13:57:11.132359", "rc": 0,
"start": "2018-02-15 13:57:11.125886", "stderr": "", "stderr_lines": [],
"stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
- Looking at the Ansible playbook, I see it is looking for an ip rule that uses a custom
routing table, but I have no such rule:
[root@ovirt1 ~]# ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
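For reference, unescaping the "cmd" string from the error above, the check the playbook keeps
retrying is basically this (my reading of the escaped JSON; the awk field number is copied
from it):

[root@ovirt1 ~]# ip rule list | grep ovirtmgmt | sed 's/\[.*\] //g' | awk '{ print $9 }'

On this host it prints nothing, which matches the empty "stdout" in the error and, I guess,
is why the task gives up after 50 attempts.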
- I also find that there is no "ovirtmgmt" bridge (output of brctl show):
bridge name     bridge id               STP enabled     interfaces
;vdsmdummy;     8000.000000000000       no
virbr0          8000.525400e6ca97       yes             virbr0-nic
                                                        vnet0
- But I haven't found any reference in the Ansible playbook to the creation of this network.
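In case anyone wants to reproduce my search, I was grepping the playbooks that ship with the
setup package; on my host they appear to live under /usr/share/ovirt-hosted-engine-setup/ansible
(path from memory, so it may differ):

[root@ovirt1 ~]# grep -rn 'ip rule' /usr/share/ovirt-hosted-engine-setup/ansible/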
- The HE VM gets created and I can connect to it over SSH, so I tried to find out whether the
ovirtmgmt network is supposed to be created via vdsm from the engine
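If it helps, whatever the engine asks vdsm to do should also show up on the host side in the
vdsm logs (standard log locations; the grep pattern is just my guess at the relevant verb):

[root@ovirt1 ~]# grep -i setupnetworks /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log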
- Looking at the engine.log I found this:
2018-02-15 13:49:26,850Z INFO  [org.ovirt.engine.core.bll.host.HostConnectivityChecker]
  (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Engine managed to communicate
  with VDSM agent on host 'ovirt1' with address 'ovirt1' ('06651b32-4ef8-4b5d-ab2d-c38e84c2d790')
2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
  (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] EVENT_ID:
  VLAN_ID_MISMATCH_FOR_MANAGEMENT_NETWORK_CONFIGURATION(1,119), Failed to configure
  management network on host ovirt1. Host ovirt1 has an interface bond0.1005 for the
  management network configuration with VLAN-ID (1005), which is different from
  data-center definition (none).
2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
  (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Exception:
  org.ovirt.engine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException:
  Failed to configure management network
- So I guess that the engine tried to create the ovirtmgmt bridge on the host via vdsm,
but it failed because "Host ovirt1 has an interface bond0.1005 for the management
network configuration with VLAN-ID (1005), which is different from data-center definition
(none)"
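Since I can reach the engine VM, I suppose I could confirm what the data center actually has
defined for ovirtmgmt through the REST API, something like this (ENGINE_FQDN and the admin
password are placeholders; there may be a nicer query):

curl -k -u admin@internal:PASSWORD https://ENGINE_FQDN/ovirt-engine/api/networks

and then look for a <vlan> element under the ovirtmgmt network in the output (I'd expect
none, matching the error).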
- Of course I haven't had the opportunity to set up the management network's VLAN
in the data center yet, because I'm still trying to deploy the Hosted Engine
Is this a supported configuration? Is there a way I can tell the data center that the
management network is on VLAN 1005? Should I file a bug report?
Is there a workaround?
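One thing I'm tempted to try (no idea whether it's supported at this point of the deploy) is
tagging ovirtmgmt with VLAN 1005 directly through the engine REST API before the host
installation is retried, roughly like this (NETWORK_ID would be the ovirtmgmt id from the
listing above; ENGINE_FQDN and PASSWORD are again placeholders):

curl -k -u admin@internal:PASSWORD -X PUT \
     -H 'Content-Type: application/xml' \
     -d '<network><vlan id="1005"/></network>' \
     https://ENGINE_FQDN/ovirt-engine/api/networks/NETWORK_ID

If there is a cleaner or supported way to do this, I'd love to hear it.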
Regards!
--
Miguel Armas
CanaryTek Consultoria y Sistemas SL
http://www.canarytek.com/