Hello,
I'm just writing up my experience in case others have come across similar issues or have
a similar configuration.
I have a single-PC (well, iMac) home lab environment with VMware Fusion 12.x for
virtualisation. I have tried to set up oVirt 4.5 as a self-hosted engine, deployed on an
oVirt 4.5.0 node VM (with nested virtualisation), and the engine deployment fails at this
point:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 90, "changed": true,
"cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:36:90:88 | awk '{ print $5 }' | cut -f1 -d'/'",
"delta": "0:00:00.052065", "end": "2022-04-27 21:12:08.176323", "msg": "", "rc": 0,
"start": "2022-04-27 21:12:08.124258", "stderr": "", "stderr_lines": [],
"stdout": "", "stdout_lines": []}
This seems to be the point where the oVirt engine has gone through an initial deployment
and Ansible is trying to get the IP of the engine VM to resume configuration and testing.
I believe the VM is running on the oVirt node at this point, with a Linux bridge between
the oVirt node and the engine VM. The oVirt node itself has two vNICs: one on the oVirt
management network and a second, unconfigured NIC to be used for Gluster storage.
I access the oVirt nodes and Cockpit via a "client" Linux VM, also on the oVirt
management network.
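For what it's worth, the node-side view of the NICs and bridges at this stage can be checked with the usual iproute2 tools (just a sketch of what I mean, run on the oVirt node):

  # Both vNICs should be visible; only the management one has an address so far
  ip -br link show
  ip -br addr show

  # Which bridges exist at this point, and which interfaces are enslaved to them
  ip -br link show type bridge
  bridge link show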
It feels like it's this Linux bridge and networking that is not working correctly and
causing the deployment to fail. This could be because I'm using a virtualised NIC: while
promiscuous mode and perhaps some other features are sort-of supported, the features
needed for this kind of complex networking may not be. This is despite the network I'm
using being an internal one, without the iMac host connected to it. WAN connections for
oVirt go via a NAT gateway/firewall VM that has its LAN side on the oVirt management
network and its WAN side on a bridge to the host's NIC (wifi in my case).
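If it helps with debugging, one way to see whether frames are actually being dropped on that bridge (rather than the engine VM simply never asking for an address) should be to capture on both sides of it while the local VM boots. A sketch, assuming the local VM is called HostedEngineLocal (which is what the deployment normally names it) and that its tap device turns out to be vnet0:

  # Find the tap device libvirt attached the local engine VM to
  virsh -r domiflist HostedEngineLocal

  # Watch DHCP traffic on the VM's tap device and on the bridge itself;
  # requests seen on the tap but never on virbr0 (or vice versa) would point
  # at the bridge/virtual NIC rather than at the engine VM
  tcpdump -ne -i vnet0 port 67 or port 68
  tcpdump -ne -i virbr0 port 67 or port 68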
I got similar results when I tried to set up OpenStack Neutron networking: I could not
get traffic to cross the Linux bridge from one compute node VM to another. Searching
turns up a small handful of other people hitting this same issue/error, and many of them
are also using nested virtualisation, whether in ESXi, VirtualBox, etc.
The Linux bridge created by the oVirt engine deployment is (from Cockpit):

virbr0 (Bridge)  52:54:00:8F:28:3A
  Status:  192.168.222.1/24, fe80:0:0:0:5054:ff:fe8f:283a/64, fd00:1234:5678:900:0:0:0:1/64
  Carrier: Yes
  General: Connect automatically
  IPv4:    Address 192.168.222.1/24
  IPv6:    Address fd00:1234:5678:900:0:0:0:1/64
  Bridge:  Spanning tree protocol, forward delay 2
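For completeness, the same bridge can be inspected from the node's shell, along with libvirt's own definition of the NATed network behind it (again just a sketch, run as root on the node):

  # Bridge details (STP, forward delay) as the kernel sees them
  ip -d link show virbr0

  # libvirt's definition of the "default" (NAT) network behind virbr0
  virsh -r net-dumpxml default

  # IP forwarding has to be enabled for traffic to be NATed out of virbr0
  sysctl net.ipv4.ip_forward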
I have actually seen traffic cross this bridge while watching the stats as the engine was
being deployed.
If anyone has experienced this problem before and is certain it is due to having a
virtualised network, please let me know. Alternatively, if there is a workaround, that
would be great; otherwise I can do more troubleshooting on request, but it does feel
like this is ultimately a problem with the platform I'm using - one that would not
be encountered in a production environment.
Thanks