# rpm -qa | grep ovirt-node
ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
python3-ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
I removed ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch, but yum update
and the check for updates in the GUI still show no updates available.
I can attempt reinstalling the package tomorrow, but I'm not confident it
will work since it was already installed.
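One thing that might be worth trying before a full reinstall is forcing dnf to lay the package down again and comparing the layer version against the rpm. A rough sketch (the dnf/nodectl lines are shown as comments since they only make sense on the host itself; the versions are the ones from this thread):

```shell
# On the affected host (assumption: run as root), force a reinstall so the
# rpm scriptlets re-create the 4.4.6 image layer:
#   dnf reinstall ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
#   nodectl info        # then verify a 4.4.6 layer and bootloader entry appear

# The version embedded in a layer name can be extracted for comparison
# against the installed rpm, e.g.:
layer="ovirt-node-ng-4.4.6.3-0.20210518.0+1"
version=$(echo "$layer" | sed -E 's/^ovirt-node-ng-([0-9.]+)-.*/\1/')
echo "$version"    # prints 4.4.6.3
```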
On Thu, May 27, 2021 at 9:32 PM wodel youchi <wodel.youchi(a)gmail.com> wrote:
Hi,
On the "bad" hosts, check whether any 4.4.6 rpms are installed; if so,
remove them, then try the update again.
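That check could be scripted roughly like this (a sketch; the host-only commands are commented out, and the sample package names are taken from the rpm -qa output earlier in the thread):

```shell
# On each "bad" host, list any installed 4.4.6 packages:
#   rpm -qa | grep '4\.4\.6'
# and remove whatever turns up, e.g.:
#   dnf remove ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch

# The same grep filter, demonstrated on sample package names:
printf 'ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch\novirt-node-ng-nodectl-4.4.0-1.el8.noarch\n' \
  | grep '4\.4\.6'
# prints only ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
```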
You can try to install the ovirt-node rpm manually, here is the link
https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-im...
> # dnf install ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>
PS: remember to use tmux if executing via ssh.
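That tmux workflow might look like this (a sketch; the session name and log path are made up, and it assumes tmux is installed on the host):

```shell
# Run the update inside a named tmux session so a dropped ssh
# connection doesn't kill it (-A attaches if the session already exists):
#   tmux new-session -A -s ovirt-update
#   dnf update
# Detach with Ctrl-b d; reattach later with: tmux attach -t ovirt-update
#
# If tmux is unavailable, nohup with redirected output is a fallback:
#   nohup dnf -y update > /tmp/dnf-update.log 2>&1 &
# The redirection keeps stdout and stderr together in one log, e.g.:
echo "update started" > /tmp/dnf-update-demo.log 2>&1
cat /tmp/dnf-update-demo.log    # prints: update started
```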
Regards.
Le jeu. 27 mai 2021 à 22:21, Jayme <jaymef(a)gmail.com> a écrit :
> The good host:
>
> bootloader:
> default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
> entries:
> ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64):
> index: 0
> kernel:
> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/vmlinuz-4.18.0-301.1.el8.x86_64
> args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
> rd.lvm.lv=onn_orchard1/swap
> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
> img.bootid=ovirt-node-ng-4.4.6.3-0.20210518.0+1
> root: /dev/onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
> initrd:
> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/initramfs-4.18.0-301.1.el8.x86_64.img
> title: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
> blsid: ovirt-node-ng-4.4.6.3-0.20210518.0+1-4.18.0-301.1.el8.x86_64
> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
> index: 1
> kernel:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
> args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
> rd.lvm.lv=onn_orchard1/swap
> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
> root: /dev/onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
> initrd:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
> title: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
> blsid: ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
> layers:
> ovirt-node-ng-4.4.5.1-0.20210323.0:
> ovirt-node-ng-4.4.5.1-0.20210323.0+1
> ovirt-node-ng-4.4.6.3-0.20210518.0:
> ovirt-node-ng-4.4.6.3-0.20210518.0+1
> current_layer: ovirt-node-ng-4.4.6.3-0.20210518.0+1
>
>
> The other two show:
>
> bootloader:
> default: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
> entries:
> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
> index: 0
> kernel:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
> args: crashkernel=auto resume=/dev/mapper/onn_orchard2-swap
> rd.lvm.lv=onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
> rd.lvm.lv=onn_orchard2/swap
> rhgb quiet boot=UUID=cd9dd412-2acd-4f3d-9b3e-44030153856f rootflags=discard
> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
> root: /dev/onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
> initrd:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
> title: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
> blsid: ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
> layers:
> ovirt-node-ng-4.4.5.1-0.20210323.0:
> ovirt-node-ng-4.4.5.1-0.20210323.0+1
> current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
>
> On Thu, May 27, 2021 at 6:18 PM Jayme <jaymef(a)gmail.com> wrote:
>
>> It shows the 4.4.5 image on two hosts and 4.4.6 on one. Yum update shows
>> nothing available, nor does check upgrade in the admin GUI.
>>
>> I believe these two hosts failed on the first install attempt and succeeded
>> on the second, which may have something to do with it. How can I force them
>> to update to the 4.4.6 image? Would a host reinstall do it?
>>
>> On Thu, May 27, 2021 at 6:03 PM wodel youchi <wodel.youchi(a)gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> What does "nodectl info" report on all hosts?
>>> Did you execute "refresh capabilities" after the update?
>>>
>>> Regards.
>>>
>>>
>>>
>>>
>>> Le jeu. 27 mai 2021 à 20:37, Jayme <jaymef(a)gmail.com> a écrit :
>>>
>>>> I updated my three server HCI cluster from 4.4.5 to 4.4.6. All hosts
>>>> updated successfully and rebooted and are active. I notice that only one
>>>> host out of the three is actually running oVirt node 4.4.6 and the other
>>>> two are running 4.4.5. If I check for upgrade in admin it shows no
>>>> upgrades available.
>>>>
>>>> Why are two hosts still running 4.4.5 after being successfully
>>>> upgraded/rebooted, and how can I get them on 4.4.6 if no upgrades are
>>>> being found?
>>>> _______________________________________________
>>>> Users mailing list -- users(a)ovirt.org
>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN57DRLYE3O...
>>>>
>>>