The good host:

bootloader:
  default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
  entries:
    ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64):
      index: 0
      kernel: /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/vmlinuz-4.18.0-301.1.el8.x86_64
      args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1 rd.lvm.lv=onn_orchard1/swap rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard img.bootid=ovirt-node-ng-4.4.6.3-0.20210518.0+1
      root: /dev/onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
      initrd: /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/initramfs-4.18.0-301.1.el8.x86_64.img
      title: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
      blsid: ovirt-node-ng-4.4.6.3-0.20210518.0+1-4.18.0-301.1.el8.x86_64
    ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
      index: 1
      kernel: /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
      args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1 rd.lvm.lv=onn_orchard1/swap rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
      root: /dev/onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
      initrd: /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
      title: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
      blsid: ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
layers:
  ovirt-node-ng-4.4.5.1-0.20210323.0:
    ovirt-node-ng-4.4.5.1-0.20210323.0+1
  ovirt-node-ng-4.4.6.3-0.20210518.0:
    ovirt-node-ng-4.4.6.3-0.20210518.0+1
current_layer: ovirt-node-ng-4.4.6.3-0.20210518.0+1


The other two show:

bootloader:
  default: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
  entries:
    ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
      index: 0
      kernel: /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
      args: crashkernel=auto resume=/dev/mapper/onn_orchard2-swap rd.lvm.lv=onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1 rd.lvm.lv=onn_orchard2/swap rhgb quiet boot=UUID=cd9dd412-2acd-4f3d-9b3e-44030153856f rootflags=discard img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
      root: /dev/onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
      initrd: /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
      title: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
      blsid: ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
layers:
  ovirt-node-ng-4.4.5.1-0.20210323.0:
    ovirt-node-ng-4.4.5.1-0.20210323.0+1
current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
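
(As a quick cross-check, something like the following can gather the active layer from every host in one go. This is only a sketch: the host names are placeholders, and it assumes root SSH access and that nodectl is available on each host.)

  # Compare the booted image layer on each host (host names are examples)
  for h in orchard1 orchard2 orchard3; do
      echo "== $h =="
      ssh root@"$h" 'nodectl info | grep current_layer'
  done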

On Thu, May 27, 2021 at 6:18 PM Jayme <jaymef@gmail.com> wrote:
It shows the 4.4.5 image on two hosts and 4.4.6 on one. Yum update shows nothing available, nor does Check for Upgrade in the admin GUI.

I believe these two hosts failed on the first install and succeeded on the second attempt, which may have something to do with it. How can I force them to update to the 4.4.6 image? Would reinstalling the host do it?
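
(One approach sometimes suggested when an oVirt Node host is stuck on an older image layer is to reinstall the node image-update package so its install scriptlet lays the new layer down again. Treat the sketch below as an assumption about the cause here, not a confirmed fix; reinstalling the host from the Admin Portal is the other option.)

  # Sketch, run as root on each host still showing the 4.4.5 layer
  rpm -q ovirt-node-ng-image-update           # is the 4.4.6 image rpm even installed?
  dnf reinstall ovirt-node-ng-image-update    # if it is, re-running its scriptlet should rebuild the layer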

On Thu, May 27, 2021 at 6:03 PM wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,

What does "nodectl info" report on all hosts?
Did you execute "refresh capabilities" after the update?
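
(For reference, nodectl is run locally on each oVirt Node host; a minimal check looks like this sketch.)

  # On each oVirt Node host
  nodectl info    # bootloader entries, image layers, current_layer
  nodectl check   # basic health check of the node image layout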

Regards.


On Thu, May 27, 2021 at 8:37 PM Jayme <jaymef@gmail.com> wrote:
I updated my three-server HCI cluster from 4.4.5 to 4.4.6. All hosts updated successfully, rebooted, and are active. I notice that only one host out of the three is actually running oVirt Node 4.4.6; the other two are running 4.4.5. If I check for upgrades in the admin GUI, it shows no upgrades available.

Why are two hosts still running 4.4.5 after being successfully upgraded and rebooted, and how can I get them onto 4.4.6 if no upgrades are being found?
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN57DRLYE3OIOP7O3SPKH7P5SHB4XJRJ/