On Wed, Aug 25, 2021 at 12:35 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello,
I'm testing what's described in the subject of this thread in a test env with novirt1 and novirt2 as hosts.
The first host to be reinstalled is novirt2.
For this I downloaded the 4.4.8 Node ISO:
https://resources.ovirt.org/pub/ovirt-4.4/iso/ovirt-node-ng-installer/4.4.8-2021081816/el8/ovirt-node-ng-installer-4.4.8-2021081816.el8.iso

Before running the restore command for the first scratched node, I pre-installed the appliance RPM on it and got:
ovirt-engine-appliance-4.4-20210818155544.1.el8.x86_64
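
For reference, this is more or less what I ran on the freshly reinstalled node to get there (assuming the Node image ships with the oVirt repositories already enabled, and if I recall the restore syntax correctly):

# on novirt2, before launching the restore
dnf install -y ovirt-engine-appliance
rpm -q ovirt-engine-appliance
# then: hosted-engine --deploy --restore-from-file=<backup file>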

I selected the pause option and arrived here, with the local engine VM completing its setup:

[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add host]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Include after_add_host tasks files]
[ INFO  ] You can now connect to https://novirt2.localdomain.local:6900/ovirt-engine/ and check the status of this host and eventually remediate it, please continue only when the host is listed as 'up'
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until /tmp/ansible.4_o6a2wo_he_setup_lock is removed, delete it once ready to proceed]
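
If I understood that message correctly, once the host shows as 'up' in the temporary engine I only have to remove the lock file it mentions to let the deployment resume:

# on novirt2, only once the host is listed as 'up'
rm -f /tmp/ansible.4_o6a2wo_he_setup_lock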

But connecting to the provided https://novirt2.localdomain.local:6900/ovirt-engine/ URL,
I see that only the host still on 4.3.10 shows as up, while novirt2 is not responsive.

vm situation:
https://drive.google.com/file/d/1OwHHzK0owU2HWZqvHFaLLbHVvjnBhRRX/view?usp=sharing

storage situation:
https://drive.google.com/file/d/1D-rmlpGsKfRRmYx2avBk_EYCG7XWMXNq/view?usp=sharing

hosts situation:
https://drive.google.com/file/d/1yrmfYF6hJFzKaG54Xk0Rhe2kY-TIcUvA/view?usp=sharing

In engine.log I see

2021-08-25 09:14:38,548+02 ERROR [org.ovirt.engine.core.vdsbroker.HostDevListByCapsVDSCommand] (EE-ManagedThreadFactory-engine-Thread-4) [5f4541ee] Command 'HostDevListByCapsVDSCommand(HostName = novirt2.localdomain.local, VdsIdAndVdsVDSCommandParametersBase:{hostId='ca9ff6f7-5a7c-4168-9632-998c52f76cfa', vds='Host[novirt2.localdomain.local,ca9ff6f7-5a7c-4168-9632-998c52f76cfa]'})' execution failed: java.net.ConnectException: Connection refused

and this message keeps repeating continuously...
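
The 'Connection refused' makes me think the engine simply cannot reach vdsmd on its usual port 54321 on novirt2, so on the host I'm checking with something like:

# on novirt2: is vdsmd running and listening on 54321?
systemctl status vdsmd
ss -tlnp | grep 54321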

I also tried to restart vdsmd on novirt2, but nothing changed.
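
In case it helps, this is roughly how I'm looking at its logs after the restart:

# any errors from vdsmd after the restart?
journalctl -u vdsmd --since "30 minutes ago" --no-pager
tail -n 100 /var/log/vdsm/vdsm.log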

Do I have to restart the HA daemons on novirt2?
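
If so, I guess it would be something like this (assuming ovirt-ha-broker and ovirt-ha-agent are already configured at this stage of the deployment):

systemctl restart ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status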

Any insight?

Thanks
Gianluca


It seems it has not been able to configure the networks on novirt2.localdomain.local, as I see no ovirtmgmt bridge...
During setup it asked for the network card and I specified enp1s0 (the default proposed in square brackets was enp2s0).

172.19.0.x is the mgmt network (and the IP of novirt2); 172.24.0.x is for iSCSI.

[root@novirt2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 56:6f:bc:9a:00:5b brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.232/24 brd 172.19.0.255 scope global noprefixroute enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::546f:bcff:fe9a:5b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 56:6f:bc:9a:00:5c brd ff:ff:ff:ff:ff:ff
    inet 172.24.0.232/24 brd 172.24.0.255 scope global noprefixroute enp2s0
       valid_lft forever preferred_lft forever
    inet6 fe80::546f:bcff:fe9a:5c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 56:6f:bc:9a:00:5d brd ff:ff:ff:ff:ff:ff
6: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:8b:b3:3a brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.1/24 brd 192.168.222.255 scope global virbr0
       valid_lft forever preferred_lft forever
7: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:78:35:42 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe78:3542/64 scope link
       valid_lft forever preferred_lft forever
[root@novirt2 ~]#
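
To double check that vdsm really created no bridge, I suppose I can also look at the bridges and at the networks vdsm knows about (if I remember the vdsm-tool verb correctly):

# any bridge at all on the host?
ip -br link show type bridge
# networks known to vdsm
vdsm-tool list-nets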

[root@novirt2 network-scripts]# ll
total 12
-rw-r--r--. 1 root root 368 Aug 25 00:43 ifcfg-enp1s0
-rw-r--r--. 1 root root 277 Aug 25 00:51 ifcfg-enp2s0
-rw-r--r--. 1 root root 247 Aug 25 00:43 ifcfg-enp3s0
[root@novirt2 network-scripts]#
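
There is no ifcfg-ovirtmgmt either. If I understand correctly, vdsm persists its network configuration under /var/lib/vdsm/persistence/netconf/, so I can check whether anything was saved there and, in that case, try to reapply it:

ls -lR /var/lib/vdsm/persistence/netconf/
# reapply the persisted network config, assuming something was actually persisted
vdsm-tool restore-nets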

The strange thing is that if I go to the web admin page at the temporary manager IP and select host novirt2 -> Network Interfaces -> Setup Host Networks, I see eth0, eth1 and eth2 instead of enp1s0, enp2s0, enp3s0...
See
https://drive.google.com/file/d/1CMmIuK5b1nOx6096FlTGMd2uj3vP3ruv/view?usp=sharing
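
To see which NIC names vdsm itself reports (as opposed to what the web admin shows), I suppose I can query it directly once vdsmd answers again, e.g.:

# dump host capabilities and look at the nics section
vdsm-client Host getCapabilities | grep -A20 '"nics"'
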
Gianluca