On Thu, Nov 12, 2020 at 11:08 AM Gianluca Cecchi <gianluca.cecchi(a)gmail.com>
wrote:
On Wed, Nov 11, 2020 at 10:01 PM Gianluca Cecchi <
gianluca.cecchi(a)gmail.com> wrote:
>
> NOTE: this was a cluster in 4.3.10 and I updated it to 4.4.2 and I
> noticed that the OVN config was not retained and I had to run on hosts:
>
> [root@ov200 ~]# vdsm-tool ovn-config engine_ip ov200_ip_on_mgmt
> Using default PKI files
> Created symlink
> /etc/systemd/system/multi-user.target.wants/openvswitch.service →
> /usr/lib/systemd/system/openvswitch.service.
> Created symlink
> /etc/systemd/system/multi-user.target.wants/ovn-controller.service →
> /usr/lib/systemd/system/ovn-controller.service.
> [root@ov200 ~]#
>
> Now it seems the problem persists...
> Why do I have to run it each time?
>
> Gianluca
>
In the meantime I can confirm that the manual step below on ov301 made the
host show up again among the chassis of the OVN southbound database on the
engine. I was then able to migrate VMs so I could update the other host,
and afterwards, for example, to successfully ping between VMs on OVN across
the two hosts:
[root@ov301 vdsm]# vdsm-tool ovn-config 10.4.192.43 10.4.192.34
Using default PKI files
Created symlink
/etc/systemd/system/multi-user.target.wants/openvswitch.service →
/usr/lib/systemd/system/openvswitch.service.
Created symlink
/etc/systemd/system/multi-user.target.wants/ovn-controller.service →
/usr/lib/systemd/system/ovn-controller.service.
[root@ov301 vdsm]#
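As a quick sanity check that the host is registered again, running
ovn-sbctl on the machine hosting the OVN central databases (the engine, in
a default oVirt setup) should list one Chassis entry per host together with
its geneve encap IP (10.4.192.34 for ov301, i.e. the second argument passed
to vdsm-tool above):

ovn-sbctl show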
One further update.
On the other 4.4.2 host I applied the now recommended approach of updating
from the web admin GUI, after putting the host into maintenance:
Hosts --> Select Host --> Installation --> Upgrade
I deselected the "reboot host" option and the update completed successfully.
Then I manually rebooted the host from the web admin GUI:
Management --> SSH Management --> Restart
After the reboot everything is OK and I still see the host as one of the
southbound chassis.
I can activate the host (though why do I get at least 10 popups with the
same message "Finished Activating Host ov200"?).
If I compare with diff the packages installed on the two hosts I see:
< = ov200 (the one from web admin gui)
> = ov301 (updated through dnf update)
19c19
< ansible-2.9.14-1.el8.noarch
---
> ansible-2.9.15-2.el8.noarch
262d261
< gpg-pubkey-56863776-5f117571
658c657
< NetworkManager-1.26.2-1.el8.x86_64
---
> NetworkManager-1.22.14-1.el8.x86_64
660,663c659,662
< NetworkManager-libnm-1.26.2-1.el8.x86_64
< NetworkManager-ovs-1.26.2-1.el8.x86_64
< NetworkManager-team-1.26.2-1.el8.x86_64
< NetworkManager-tui-1.26.2-1.el8.x86_64
---
> NetworkManager-libnm-1.22.14-1.el8.x86_64
> NetworkManager-ovs-1.22.14-1.el8.x86_64
> NetworkManager-team-1.22.14-1.el8.x86_64
> NetworkManager-tui-1.22.14-1.el8.x86_64
1079d1077
< yum-utils-4.0.12-4.el8_2.noarch
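For reference, a comparison like the one above can be produced along these
lines (the file paths are just examples):

rpm -qa | sort > /tmp/pkgs-$(hostname -s).txt
# after copying both lists to the same machine:
diff /tmp/pkgs-ov200.txt /tmp/pkgs-ov301.txt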
Any comments?
On the host updated through the web admin GUI, if I run dnf update I get
the following proposed transaction:
Dependencies resolved.
======================================================================================================================
 Package                       Arch    Version         Repository                                                Size
======================================================================================================================
Upgrading:
 NetworkManager-config-server  noarch  1:1.26.2-1.el8  ovirt-4.4-copr:copr.fedorainfracloud.org:networkmanager:NetworkManager-1.26  117 k
 ansible                       noarch  2.9.15-2.el8    ovirt-4.4-centos-ovirt44                                  17 M
 nmstate                       noarch  0.3.6-2.el8     ovirt-4.4-copr:copr.fedorainfracloud.org:nmstate:nmstate-0.3                  34 k
 python3-libnmstate            noarch  0.3.6-2.el8     ovirt-4.4-copr:copr.fedorainfracloud.org:nmstate:nmstate-0.3                 178 k
Installing dependencies:
 python3-varlink               noarch  29.0.0-1.el8    BaseOS                                                    49 k

Transaction Summary
======================================================================================================================
Install  1 Package
Upgrade  4 Packages

Total download size: 18 M
Why was ansible not updated by the upgrade from the GUI?
Probably on a plain CentOS Linux host I shouldn't run any "dnf update"
command at all? Or what is a clear statement on how to manage plain CentOS
Linux hosts in 4.4?
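To dig into the ansible discrepancy, I guess one can inspect the
transaction history on each host with plain dnf commands (<ID> is a
placeholder for the transaction number):

dnf history list                     # find the ID of the upgrade transaction
dnf history info <ID>                # packages and repos involved in it
dnf --showduplicates list ansible    # ansible versions available per repo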
If that is the case, couldn't some sort of global version lock be put in
place to prevent "dnf update" commands?
Thanks,
Gianluca