Upgrade oVirt Host from 4.4.0 to 4.4.2 fails

Hey,

A bunch of hosts here installed from the oVirt Node image; I have upgraded the self-hosted engine successfully. I ran Check Upgrade on one of the hosts and it was eligible for an upgrade. I used the UI to let it upgrade, but after multiple retries it always fails on "Prepare NGN host for upgrade," so I chose another host as a test. I set the host into Maintenance and let all the VMs migrate successfully, and made sure I have the latest 4.4.2 repo (it was 4.4.0):

  yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm

and then issued "dnf upgrade":

  Installing:
    ovirt-openvswitch           replacing openvswitch.x86_64 2.11.1-5.el8
    ovirt-openvswitch-ovn       replacing ovn.x86_64 2.11.1-5.el8
    ovirt-openvswitch-ovn-host  replacing ovn-host.x86_64 2.11.1-5.el8
    ovirt-python-openvswitch    replacing python3-openvswitch.x86_64 2.11.1-5.el8
  Upgrading:
    ovirt-node-ng-image-update-placeholder
  Installing dependencies:
    openvswitch2.11 ovirt-openvswitch-ovn-common ovn2.11 ovn2.11-host python3-openvswitch2.11
  Installing weak dependencies:
    network-scripts-openvswitch network-scripts-openvswitch2.11

It was very quick, but nothing else happened. I did try to reboot the host, but I still see the host as oVirt 4.4.0 and, as expected, it still says that an update is available.
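When a Node upgrade seems to apply but the host still reports the old version, it can help to inspect the image layers directly on the host. A minimal sketch using standard oVirt Node tooling (the exact layer names will differ per machine):

  # Show the imgbased layer layout; after a successful upgrade the
  # booted layer should be the 4.4.2 image
  imgbase layout
  # nodectl summarizes the current and default layers
  nodectl info
  # Check whether the image-update package actually landed
  rpm -q ovirt-node-ng-image-update ovirt-node-ng-image-update-placeholder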

This is a shot in the dark, but it's possible that your dnf command was running off of cached repo metadata. Try running 'dnf clean metadata' before 'dnf upgrade'.

--Mike
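For reference, a minimal sequence along those lines (plain dnf, nothing oVirt-specific):

  # Drop cached repo metadata so dnf re-fetches it on the next run
  dnf clean metadata
  # Optionally confirm the oVirt 4.4 repos are enabled
  dnf repolist enabled | grep -i ovirt
  # Retry the upgrade
  dnf upgrade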

Nope, I already tried it :( The "Prepare NGN host for upgrade" step always fails right after the ansible dnf task checks the "virt-v2v" package.
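If anyone wants to dig into where exactly that step dies, the ansible output for the host upgrade flow is kept on the engine machine. A hedged sketch of where to look (paths from a default 4.4 engine install; depending on the exact version the logs may live under the host-deploy directory or in the ansible-runner-service artifacts instead):

  # On the engine: newest host deploy/upgrade ansible logs
  ls -t /var/log/ovirt-engine/host-deploy/ | head
  # Search the newest log for the failing task around the virt-v2v check
  grep -i -A3 'virt-v2v' "$(ls -t /var/log/ovirt-engine/host-deploy/* | head -1)"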

For oVirt Nodes upgrading from 4.4.0 to 4.4.2, you must remove the LVM filter before the upgrade, otherwise the node upgrade won't produce a properly booting host. It's in the upgrade release notes as a known issue: https://www.ovirt.org/release/4.4.2/
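For anyone following along, a hedged sketch of what checking for and removing that filter can look like (back up lvm.conf first; the exact filter line varies per host, and vdsm-tool can regenerate a proper filter once the upgraded node is back up):

  # See whether an LVM filter is set on this host
  grep -n '^\s*filter' /etc/lvm/lvm.conf
  # Comment it out before the upgrade, keeping a backup
  cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
  sed -i 's/^\(\s*filter\s*=\)/# \1/' /etc/lvm/lvm.conf
  # After the upgraded node boots, let vdsm recreate a correct filter
  vdsm-tool config-lvm-filter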

It's not related; I have no multipath devices and I don't get dropped into emergency mode.
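For completeness, a quick way to confirm there really are no multipath devices in play:

  # Prints nothing if no multipath maps exist
  multipath -ll
  # Multipath devices would show up with TYPE "mpath"
  lsblk -o NAME,TYPE,SIZE,MOUNTPOINT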

It seems the problem, or at least part of it (because it still doesn't get to the part of creating the imgbase layer), is related to the /tmp/yum_updates file. In /usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-check-upgrade/tasks/main.yml the check runs:

  yum check-update -q | cut -d ' ' -f1 | sed '/^$/d' >> /tmp/yum_updates

For some reason it also lists these packages:

  gluster-ansible-cluster.src
  gluster-ansible-infra.src
  gluster-ansible-maintenance.src
  gluster-ansible-roles.src

These do not exist as installable packages; they are source RPMs, so the ansible playbook for the host upgrade fails. I did a quick workaround and added an "egrep -v src":

  yum check-update -q | egrep -v src | cut -d ' ' -f1 | sed '/^$/d' >> /tmp/yum_updates

Now the ansible playbook doesn't fail; it says the upgrade was successful and the host reboots, but it doesn't get upgraded.

Also, I noticed that the playbook makes sure the dnf cache is up to date (update_cache is set to true) when first checking the ovirt-host package, but it also does this for every single package in the task after that, so there's no need for update_cache there.

As a workaround to upgrade to 4.4.2 I have to reinstall every host with oVirt Node 4.4.2, as it seems the upgrade process is broken.
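One note on that workaround: "egrep -v src" drops any line that merely contains the substring "src", which could silently hide a legitimate package with "src" somewhere in its name. A slightly tighter variant that only filters entries whose arch suffix is .src (a sketch, untested beyond the output format shown above):

  # check-update prints "name.arch  version  repo"; after cut we have
  # "name.arch", so anchor the filter on the .src suffix
  yum check-update -q | cut -d ' ' -f1 | grep -v '\.src$' | sed '/^$/d' >> /tmp/yum_updates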

On Sat, Oct 3, 2020 at 1:04 AM Erez Zarum <erezz@nanosek.com> wrote:
Also, I noticed that the playbook makes sure the dnf cache is up to date (update_cache is set to true) when first checking the ovirt-host package, but it also does this for every single package in the task after that, so there's no need for update_cache there.

This is tracked in: https://bugzilla.redhat.com/show_bug.cgi?id=1880962
Adding Dana. Would you like to open a bug about this? I'm not sure about the exact flow; I guess it's not strictly about having src packages, but about having packages that are not re-installable (meaning you installed them from somewhere other than a repo, the repo was removed after installation, they were removed from their repo, etc.).

Thanks and best regards,
-- Didi
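A hedged way to spot such packages on a host (dnf calls installed packages that no enabled repo currently provides "extras"):

  # List installed packages not available from any enabled repo;
  # these are the "not re-installable" candidates described above
  dnf list extras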

Hi,

Fixes were added for the two tasks that were mentioned:
1. The command that we run to check for updates: https://gerrit.ovirt.org/#/c/110713/
2. Making sure that the cache is up to date is now done only once, and not for each package: https://gerrit.ovirt.org/#/c/111419/

Thanks,
Dana
participants (5)
- Dana Elfassy
- Edward Berger
- Erez Zarum
- Michael Thomas
- Yedidyah Bar David