Hi,

I have learned to install a self-hosted engine directly on the physical interfaces.

Later, you can move it, together with the hosted-engine network, onto the different bonds or VLANs.

This has worked fine for me around 20 times.
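For reference, this is roughly the starting point I use. This is only a sketch: the interface name and the addresses are taken from your mail and would need adjusting, and NM_CONTROLLED=no assumes the classic network-scripts setup on CentOS 7.

```
# /etc/sysconfig/network-scripts/ifcfg-ens1f0
# Plain physical interface, no bond or VLAN yet.
# hosted-engine --deploy then builds the ovirtmgmt bridge on top of it.
DEVICE=ens1f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.18.206.184
NETMASK=255.255.254.0
GATEWAY=172.18.206.1
DNS1=172.20.150.10
NM_CONTROLLED=no
```

Once the engine is up, ovirtmgmt can be moved onto the bond/VLAN from the engine UI (Compute > Hosts > select host > Network Interfaces > Setup Host Networks).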

Best regards,
marcel


On 3 February 2021 at 19:56:47 CET, Nardus Geldenhuys <nardusg@gmail.com> wrote:
Hi oVirt land

Hope you are well. I am running into the following issue and hope you can help.

CentOS 7, fully updated.
oVirt 4.3, latest packages.

My network config:

[root@mob-r1-d-ovirt-aa-1-01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
   inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
   inet6 ::1/128 scope host  
      valid_lft forever preferred_lft forever
2: ens1f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
   link/ether 00:90:fa:c2:d2:48 brd ff:ff:ff:ff:ff:ff
3: ens1f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
   link/ether 00:90:fa:c2:d2:48 brd ff:ff:ff:ff:ff:ff
4: enp11s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
   link/ether 00:90:fa:c2:d2:50 brd ff:ff:ff:ff:ff:ff
5: enp11s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
   link/ether 00:90:fa:c2:d2:54 brd ff:ff:ff:ff:ff:ff
21: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
   link/ether 00:90:fa:c2:d2:48 brd ff:ff:ff:ff:ff:ff
   inet6 fe80::290:faff:fec2:d248/64 scope link  
      valid_lft forever preferred_lft forever
22: bond0.1131@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
   link/ether 00:90:fa:c2:d2:48 brd ff:ff:ff:ff:ff:ff
   inet 172.18.206.184/23 brd 172.18.207.255 scope global bond0.1131
      valid_lft forever preferred_lft forever
   inet6 fe80::290:faff:fec2:d248/64 scope link  
      valid_lft forever preferred_lft forever

[root@mob-r1-d-ovirt-aa-1-01 network-scripts]# cat ifcfg-bond0
BONDING_OPTS='mode=1 miimon=100'
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
IPV6INIT=no
NAME=bond0
UUID=c11ef6ef-794f-4683-a068-d6338e5c19b6
DEVICE=bond0
ONBOOT=yes
[root@mob-r1-d-ovirt-aa-1-01 network-scripts]# cat ifcfg-bond0.1131
DEVICE=bond0.1131
VLAN=yes
ONBOOT=yes
MTU=1500
IPADDR=172.18.206.184
NETMASK=255.255.254.0
GATEWAY=172.18.206.1
BOOTPROTO=none
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=yes
DNS1=172.20.150.10
DNS2=172.20.150.11

I get the following error:

[ INFO  ] TASK [ovirt.hosted_engine_setup : Generate output list]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Validate selected bridge interface if management bridge does not exists]
[ INFO  ] skipping: [localhost]
         Please indicate a nic to set ovirtmgmt bridge on: (bond0, bond0.1131) [bond0.1131]:  
         Please specify which way the network connectivity should be checked (ping, dns, tcp, none) [dns]:
..
..
..
..
..
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Validate selected bridge interface if management bridge does not exists]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The selected network interface is not valid"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO  ] Stage: Clean up

And if I create ifcfg-ovirtmgmt as a bridge myself, the setup fails at a later stage instead.

What is the correct network setup for my bond configuration to do a self-hosted engine setup?

Regards

Nar