Yes.
Besides updating packages, the main additional tasks we execute during host upgrade are certificate-related (checking certificate validity, enrolling certificates), configuring advanced virtualization, and configuring the LVM filter.
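For reference, the LVM filter part can also be inspected or applied manually on a host with vdsm-tool; a minimal sketch (the -y flag applies the suggested filter without prompting):

# vdsm-tool config-lvm-filter
# vdsm-tool config-lvm-filter -y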
Dana

On Mon, Oct 5, 2020 at 9:31 AM Sandro Bonazzola <sbonazzo@redhat.com> wrote:


On Sat, Oct 3, 2020 at 2:16 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:


On Fri, Sep 25, 2020 at 3:32 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:


On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:

oVirt Node 4.4.2 is now generally available


The oVirt project is pleased to announce the general availability of oVirt Node 4.4.2, as of September 25th, 2020.


This release completes the oVirt 4.4.2 release published on September 17th.


Thanks for the news!

How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1


Due to Bug 1837864 - "Host enter emergency mode after upgrading to latest build"

If you have your root file system on a multipath device on your hosts, you should be aware that after upgrading from 4.4.1 to 4.4.2 the host may enter emergency mode.
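A quick way to check whether your root file system sits on a multipath device (generic util-linux commands, nothing oVirt-specific):

# findmnt -no SOURCE /
# lsblk -o NAME,TYPE,MOUNTPOINT

If the source resolves to a /dev/mapper/mpath* device (directly, or via LVM on top of it), this issue may affect you.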

In order to prevent this, be sure to upgrade oVirt Engine first, then on your hosts (a command sketch follows the list):

  1. Remove the current LVM filter while still on 4.4.1, or from emergency mode (if already rebooted).

  2. Reboot.

  3. Upgrade to 4.4.2 (redeploy if already on 4.4.2).

  4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.

  5. Only if not using oVirt Node:
    - run "dracut --force --add multipath” to rebuild initramfs with the correct filter configuration

  6. Reboot.
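For those who prefer a command-level view, here is a minimal sketch of the host-side steps; it assumes the filter is a single "filter = ..." line in /etc/lvm/lvm.conf, so back the file up and adapt as needed:

Step 1 - remove the current LVM filter:
# cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
# sed -i '/^filter = /d' /etc/lvm/lvm.conf

Step 2 - reboot:
# reboot

Steps 3-4 - after upgrading to 4.4.2, confirm the new filter:
# vdsm-tool config-lvm-filter

Step 5 - only if not using oVirt Node, rebuild the initramfs:
# dracut --force --add multipath

Step 6 - reboot:
# reboot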



What if I'm currently on 4.4.0 and want to upgrade to 4.4.2? Do I have to follow the same steps as if I were on 4.4.1?
I would like to avoid going through 4.4.1 if possible.

I don't think we had someone testing 4.4.0 to 4.4.2, but the above procedure should work for that case as well.
The problematic filter in /etc/lvm/lvm.conf looks like:
# grep '^filter = ' /etc/lvm/lvm.conf
filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]

 

Thanks,
Gianluca


OK, so I tried on my single host HCI setup, installed with ovirt-node-ng 4.4.0 and the Gluster wizard, and never updated until now.
Updated the self-hosted engine to 4.4.2 without problems.

My host doesn't have any filter or global_filter set up in lvm.conf in 4.4.0.

So I update it:

[root@ovirt01 vdsm]# yum update

Please use the update command from the engine admin portal.
The Ansible code running from there also performs additional steps besides just yum update.
+Dana Elfassy can you elaborate on the other steps performed during the upgrade?
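If you need to script it, the same flow can be triggered through the engine REST API (engine.example.com and HOST_ID below are placeholders):

# curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" -d "<action/>" https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/upgrade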

 
Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51 PM CEST.
Dependencies resolved.
====================================================================================================
 Package                             Architecture    Version               Repository          Size
====================================================================================================
Installing:
 ovirt-node-ng-image-update          noarch          4.4.2-1.el8           ovirt-4.4          782 M
     replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8

Transaction Summary
====================================================================================================
Install  1 Package

Total download size: 782 M
Is this ok [y/N]: y
Downloading Packages:
ovirt-node-ng-image-update-4.4  27% [=====                 ] 6.0 MB/s | 145 MB     01:45 ETA

----------------------------------------------------------------------------------------------------
Total                                                               5.3 MB/s | 782 MB     02:28    
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                            1/1
  Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch                              1/2
  Installing       : ovirt-node-ng-image-update-4.4.2-1.el8.noarch                              1/2
  Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch                              1/2
  Obsoleting       : ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch                  2/2
  Verifying        : ovirt-node-ng-image-update-4.4.2-1.el8.noarch                              1/2
  Verifying        : ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch                  2/2
Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch.rpm

Installed:
  ovirt-node-ng-image-update-4.4.2-1.el8.noarch                                                    

Complete!
[root@ovirt01 vdsm]# sync
[root@ovirt01 vdsm]#

I reboot, and 4.4.2 is proposed as the default boot entry, with 4.4.0 available too.
But the default 4.4.2 goes into emergency mode, and if I log in I see that it indeed has a filter inside lvm.conf.
During boot I see this getting blocked:

A start job is running for dev-disk-by\x2d-id .......
the same for apparently 3 disks (I think the gluster volumes...)
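As an aside, from the emergency shell the pending start jobs can be listed with standard systemd commands, e.g.:

# systemctl list-jobs
# journalctl -xb --no-pager | grep -i 'start job'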


And at emergency mode:


if I log in and then exit:

Reloading system manager configuration
Starting default target

and then it gets stuck there. After some minutes I get confirmation that I am in emergency mode; I give the password again, and I can only reboot or see the journal log.

contents of output of "journalctl -xb" here:

I verified that I can safely boot into 4.4.0, just in case.
What to do now?
Thanks,

Gianluca

