oVirt 4.2.3 to 4.2.4 failed

Hello,

I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node platform, and it doesn't appear the updates worked.

[root@node6-g8-h4 ~]# yum update
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist, package_upload, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Loading mirror speeds from cached hostfile
 * ovirt-4.2-epel: linux.mirrors.es.net
Resolving Dependencies
--> Running transaction check
---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be updated
---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be obsoleting
---> Package ovirt-node-ng-image-update-placeholder.noarch 0:4.2.3.1-1.el7 will be obsoleted
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                     Arch    Version      Repository  Size
================================================================================
Installing:
 ovirt-node-ng-image-update  noarch  4.2.4-1.el7  ovirt-4.2   647 M
     replacing  ovirt-node-ng-image-update-placeholder.noarch 4.2.3.1-1.el7

Transaction Summary
================================================================================
Install  1 Package

Total download size: 647 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/ovirt-4.2/packages/ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID fe590cb7: NOKEY
Public key for ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm is not installed
ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm        | 647 MB  00:02:07
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
Importing GPG key 0xFE590CB7:
 Userid     : "oVirt <infra@ovirt.org>"
 Fingerprint: 31a5 d783 7fad 7cb2 86cd 3469 ab8c 4f9d fe59 0cb7
 Package    : ovirt-release42-4.2.3.1-1.el7.noarch (installed)
 From       : /etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ovirt-node-ng-image-update-4.2.4-1.el7.noarch                1/3
warning: %post(ovirt-node-ng-image-update-4.2.4-1.el7.noarch) scriptlet failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package ovirt-node-ng-image-update-4.2.4-1.el7.noarch
  Erasing    : ovirt-node-ng-image-update-placeholder-4.2.3.1-1.el7.noarch  2/3
  Cleanup    : ovirt-node-ng-image-update-4.2.3.1-1.el7.noarch              3/3
warning: file /usr/share/ovirt-node-ng/image/ovirt-node-ng-4.2.0-0.20180530.0.el7.squashfs.img: remove failed: No such file or directory
Uploading Package Profile
Unable to upload Package Profile
  Verifying  : ovirt-node-ng-image-update-4.2.4-1.el7.noarch                1/3
  Verifying  : ovirt-node-ng-image-update-4.2.3.1-1.el7.noarch              2/3
  Verifying  : ovirt-node-ng-image-update-placeholder-4.2.3.1-1.el7.noarch  3/3

Installed:
  ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7

Replaced:
  ovirt-node-ng-image-update-placeholder.noarch 0:4.2.3.1-1.el7

Complete!
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Cannot upload enabled repos report, is this client registered?

My engine shows the nodes as having no updates; however, the major components, including the kernel version and the port 9090 admin GUI, still show 4.2.3.

Is there anything I can provide to help diagnose the issue?

[root@node6-g8-h4 ~]# rpm -qa | grep ovirt
ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
ovirt-host-deploy-1.7.3-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
ovirt-provider-ovn-driver-1.2.10-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-setup-lib-1.1.4-1.el7.centos.noarch
ovirt-release42-4.2.3.1-1.el7.noarch
ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
ovirt-hosted-engine-setup-2.2.20-1.el7.centos.noarch
ovirt-host-dependencies-4.2.2-2.el7.centos.x86_64
ovirt-hosted-engine-ha-2.2.11-1.el7.centos.noarch
ovirt-host-4.2.2-2.el7.centos.x86_64
ovirt-node-ng-image-update-4.2.4-1.el7.noarch
ovirt-vmconsole-1.0.5-4.el7.centos.noarch
ovirt-release-host-node-4.2.3.1-1.el7.noarch
cockpit-ovirt-dashboard-0.11.24-1.el7.centos.noarch
ovirt-node-ng-nodectl-4.2.0-0.20180524.0.el7.noarch
python-ovirt-engine-sdk4-4.2.6-2.el7.centos.x86_64

[root@node6-g8-h4 ~]# yum update
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist, package_upload, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Loading mirror speeds from cached hostfile
 * ovirt-4.2-epel: linux.mirrors.es.net
No packages marked for update
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Cannot upload enabled repos report, is this client registered?
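A note for others hitting the same symptom: the telling line in the transcript above is "%post(ovirt-node-ng-image-update-4.2.4-1.el7.noarch) scriptlet failed, exit status 1". On oVirt Node, that %post scriptlet is what actually extracts the new image and creates the 4.2.4 layer, so when it fails the RPM still registers as installed while the host keeps booting 4.2.3 - exactly what rpm -qa shows here. A rough first check from the node itself (a sketch; nodectl and the imgbased log locations are standard on Node, the grep pattern is only an example):

# the new 4.2.4 layer should appear under "layers" if %post succeeded
nodectl info

# imgbased writes the reason for the failure to its log
grep -iE 'error|exception|traceback' /var/log/imgbased.log /tmp/imgbased.log 2>/dev/null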

Yes, it's the same here. It seems the bootloader isn't configured right? I did the upgrade to 4.2.4 from the UI, rebooted, and got:

[root@ovn-monster ~]# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0:
    ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0:
    ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
    ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
    ovirt-node-ng-4.2.1.1-0.20180223.0+1:
      index: 1
      title: ovirt-node-ng-4.2.1.1-0.20180223.0
      kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
      initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1

[root@ovn-monster ~]# uptime
 07:35:27 up 2 days, 15:42, 1 user, load average: 1,07, 1,00, 0,95
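Worth noting what nodectl reports here: the 4.2.4 layer exists under "layers", but "bootloader: default" still points at the 4.2.3 entry and the entries list doesn't contain 4.2.4 at all, so grub was never repointed at the new layer. A hedged way to inspect that directly on an el7-based node (standard grub2 tooling; the config path differs between BIOS and UEFI installs):

# which entry grub will boot by default
grub2-editenv list

# which boot entries actually exist (use /etc/grub2-efi.cfg on UEFI hosts)
grep ^menuentry /etc/grub2.cfg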

Yuval, can you please have a look?

2018-06-30 7:48 GMT+02:00 Oliver Riesener <Oliver.Riesener@hs-bremen.de>:
> Yes, it's the same here. It seems the bootloader isn't configured right?
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com

Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?

Thanks,
Yuval.
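Since both paths are named: which of the two exists depends on the host, so a simple way to grab the tail of whichever is present (a sketch, nothing Node-specific):

tail -n 100 /var/log/imgbased.log /tmp/imgbased.log 2>/dev/null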

2018-07-02 13:58 GMT+02:00 Yuval Turgeman <yuvalt@redhat.com>:
> Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?

Just re-tested locally in a VM 4.2.3.1 -> 4.2.4 and it worked perfectly.

# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
bootloader:
  default: ovirt-node-ng-4.2.4-0.20180626.0+1
  entries:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1:
      index: 1
      title: ovirt-node-ng-4.2.3.1-0.20180530.0
      kernel: /boot/ovirt-node-ng-4.2.3.1-0.20180530.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_host/ovirt-node-ng-4.2.3.1-0.20180530.0+1 rd.lvm.lv=onn_host/swap rhgb quiet LANG=it_IT.UTF-8 img.bootid=ovirt-node-ng-4.2.3.1-0.20180530.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3.1-0.20180530.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_host/ovirt-node-ng-4.2.3.1-0.20180530.0+1
    ovirt-node-ng-4.2.4-0.20180626.0+1:
      index: 0
      title: ovirt-node-ng-4.2.4-0.20180626.0
      kernel: /boot/ovirt-node-ng-4.2.4-0.20180626.0+1/vmlinuz-3.10.0-862.3.3.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_host/swap rd.lvm.lv=onn_host/ovirt-node-ng-4.2.4-0.20180626.0+1 rhgb quiet LANG=it_IT.UTF-8 img.bootid=ovirt-node-ng-4.2.4-0.20180626.0+1"
      initrd: /boot/ovirt-node-ng-4.2.4-0.20180626.0+1/initramfs-3.10.0-862.3.3.el7.x86_64.img
      root: /dev/onn_host/ovirt-node-ng-4.2.4-0.20180626.0+1
current_layer: ovirt-node-ng-4.2.4-0.20180626.0+1

--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com
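The healthy layout above differs from the stuck hosts in exactly two fields, which makes for a quick post-upgrade health check (a trivial sketch using nothing beyond nodectl):

# after a successful upgrade and reboot, both lines should name the new layer
nodectl info | grep -E 'default:|current_layer:'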

On 07/02/2018 04:58 AM, Yuval Turgeman wrote:
> Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?

This error adds some clarity. That said, I'm a bit unsure how the space can be the issue, given I have several hundred GB of storage in the thin pool that's unused...

How do you suggest I proceed?

Thank you for your help,

Matt

[root@node6-g8-h4 ~]# lvs
  LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                      4.79
  ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
  ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
  ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
  ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0   6.95
  pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                            76.63  50.34
  root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
  tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                      5.04
  var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                      5.86
  var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
  var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                     89.72
  var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                      6.84
  var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                      6.16

[root@node6-g8-h4 ~]# vgs
  VG              #PV #LV #SN Attr   VSize  VFree
  onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g

And the relevant part of imgbased.log:

2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments: Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update', debug=True, experimental=False, format='liveimg', stream='Image')
2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {}
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {}
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr: ovirt-node-ng-4.2.4-0.20180626.0
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/'
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {}
2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned: /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Returned: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV for path /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV 'ovirt-node-ng-4.2.3.1-0.20180530.0+1' for path '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,321 [DEBUG] (MainThread) Returned: onn_node1-g8-h4
2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,355 [DEBUG] (MainThread) Returned: 53750005760B
2018-06-29 14:19:31,355 [DEBUG] (MainThread) Recommeneded base size: 53750005760B
2018-06-29 14:19:31,355 [INFO] (MainThread) Starting base creation
2018-06-29 14:19:31,355 [INFO] (MainThread) New base will be: ovirt-node-ng-4.2.4-0.20180626.0
2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,381 [DEBUG] (MainThread) Returned: onn_node1-g8-h4/pool00
2018-06-29 14:19:31,381 [DEBUG] (MainThread) Pool: <LV 'onn_node1-g8-h4/pool00' />
2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling binary: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {}
2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Exception! Cannot create new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached threshold.
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,440 [DEBUG] (MainThread) Returned:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__main__.py", line 53, in <module>
    CliApplication()
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__init__.py", line 82, in CliApplication
    app.hooks.emit("post-arg-parse", args)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/hooks.py", line 120, in emit
    cb(self.context, *args)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 56, in post_argparse
    base_lv, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 118, in extract
    "%s" % size, nvr)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 84, in add_base_with_tree
    lvs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 310, in add_base
    new_base_lv = pool.create_thinvol(new_base.lv_name, size)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/lvm.py", line 324, in create_thinvol
    self.lvm_name])
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 390, in lvcreate
    return self.call(["lvcreate"] + args, **kwargs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 378, in call
    stdout = call(*args, **kwargs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 153, in call
    return subprocess.check_output(*args, **kwargs).strip()
  File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00']' returned non-zero exit status 5
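For what it's worth, the "reached threshold" wording appears to come from LVM itself rather than imgbased: lvcreate refuses to create a new thin volume once the pool's data or metadata usage crosses the autoextend threshold configured in lvm.conf, and pool00 here is at 76.63% data / 50.34% metadata even though plenty of virtual space is unprovisioned. A hedged sketch of things to check before retrying the upgrade (all stock LVM commands; the fstrim mountpoint is a guess):

# the percentage lvcreate checks the pool against
lvmconfig activation/thin_pool_autoextend_threshold

# current pool usage
lvs -o lv_name,data_percent,metadata_percent onn_node1-g8-h4/pool00

# option 1: grow the pool with the VG's remaining free extents
# (VFree is exactly 8.00g here, so this may need a smaller step or a new PV)
lvextend -L +8G onn_node1-g8-h4/pool00

# option 2: reclaim space from the big thin LV (var_local_images is at 89.72%),
# e.g. delete data inside it and discard the freed blocks back to the pool
fstrim -v /var/local/images    # mountpoint is an assumption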
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log ?
Thanks, Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo@redhat.com <mailto:sbonazzo@redhat.com>> wrote:
Yuval, can you please have a look?
2018-06-30 7:48 GMT+02:00 Oliver Riesener <Oliver.Riesener@hs-bremen.de <mailto:Oliver.Riesener@hs-bremen.de>>:
Yes, here is the same.
It seams the bootloader isn’t configured right ?
I did the Upgrade and reboot to 4.2.4 from UI and got:
[root@ovn-monster ~]# nodectl info layers: ovirt-node-ng-4.2.4-0.20180626.0: ovirt-node-ng-4.2.4-0.20180626.0+1 ovirt-node-ng-4.2.3.1-0.20180530.0: ovirt-node-ng-4.2.3.1-0.20180530.0+1 ovirt-node-ng-4.2.3-0.20180524.0: ovirt-node-ng-4.2.3-0.20180524.0+1 ovirt-node-ng-4.2.1.1-0.20180223.0: ovirt-node-ng-4.2.1.1-0.20180223.0+1 bootloader: default: ovirt-node-ng-4.2.3-0.20180524.0+1 entries: ovirt-node-ng-4.2.3-0.20180524.0+1: index: 0 title: ovirt-node-ng-4.2.3-0.20180524.0 kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64 args: "ro crashkernel=auto rd.lvm.lv <http://rd.lvm.lv>=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv <http://rd.lvm.lv>=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1" initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 ovirt-node-ng-4.2.1.1-0.20180223.0+1: index: 1 title: ovirt-node-ng-4.2.1.1-0.20180223.0 kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64 args: "ro crashkernel=auto rd.lvm.lv <http://rd.lvm.lv>=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv <http://rd.lvm.lv>=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1" initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1 [root@ovn-monster ~]# uptime 07:35:27 up 2 days, 15:42, 1 user, load average: 1,07, 1,00, 0,95
Am 29.06.2018 um 23:53 schrieb Matt Simonsen <matt@khoza.com <mailto:matt@khoza.com>>:
Hello,
I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node platform and it doesn't appear the updates worked.
[root@node6-g8-h4 ~]# yum update Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist, : package_upload, product-id, search-disabled-repos, subscription- : manager This system is not registered with an entitlement server. You can use subscription-manager to register. Loading mirror speeds from cached hostfile * ovirt-4.2-epel: linux.mirrors.es.net <http://linux.mirrors.es.net> Resolving Dependencies --> Running transaction check ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be updated ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be obsoleting ---> Package ovirt-node-ng-image-update-placeholder.noarch 0:4.2.3.1-1.el7 will be obsoleted --> Finished Dependency Resolution
Dependencies Resolved
========================================================================================================================= Package Arch Version Repository Size ========================================================================================================================= Installing: ovirt-node-ng-image-update noarch 4.2.4-1.el7 ovirt-4.2 647 M replacing ovirt-node-ng-image-update-placeholder.noarch 4.2.3.1-1.el7
Transaction Summary ========================================================================================================================= Install 1 Package
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com

Not in front of my laptop so it's a little hard to read, but does it say 8G free on the VG?

On Mon, Jul 2, 2018, 20:00 Matt Simonsen <matt@khoza.com> wrote:
This error adds some clarity.
That said, I'm a bit unsure how space can be the issue, given that I have several hundred GB of unused storage in the thin pool...
How do you suggest I proceed?
Thank you for your help,
Matt
[root@node6-g8-h4 ~]# lvs
  LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
  ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
  ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
  ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
  ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
  pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                           76.63  50.34
  root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
  tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     5.04
  var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                     5.86
  var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
  var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                    89.72
  var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                     6.84
  var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                     6.16

[root@node6-g8-h4 ~]# vgs
  VG              #PV #LV #SN Attr   VSize  VFree
  onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
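For what it's worth, the pool00 row is the one that matters here: LVM gates new thin volumes on the pool's own fill percentages (76.63% data, 50.34% metadata above), not on the VG's free extents, so a VG can show plenty of room while the pool still trips a threshold. A quick way to watch just those numbers (a minimal sketch; lv_name, data_percent and metadata_percent are standard lvs report fields):

# Fill levels of the thin pool itself, which gate new thin LV creation
lvs -o lv_name,lv_size,data_percent,metadata_percent onn_node1-g8-h4/pool00

# VG free space is tracked separately and can stay large while the pool is near its limit
vgs -o vg_name,vg_size,vg_free onn_node1-g8-h4

The imgbased.log excerpt below is the update run that hit this.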
2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments: Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update', debug=True, experimental=False, format='liveimg', stream='Image')
2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {}
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {}
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr: ovirt-node-ng-4.2.4-0.20180626.0
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/'
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {}
2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned: /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Returned: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV for path /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV 'ovirt-node-ng-4.2.3.1-0.20180530.0+1' for path '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,321 [DEBUG] (MainThread) Returned: onn_node1-g8-h4
2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,355 [DEBUG] (MainThread) Returned: 53750005760B
2018-06-29 14:19:31,355 [DEBUG] (MainThread) Recommeneded base size: 53750005760B
2018-06-29 14:19:31,355 [INFO] (MainThread) Starting base creation
2018-06-29 14:19:31,355 [INFO] (MainThread) New base will be: ovirt-node-ng-4.2.4-0.20180626.0
2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,381 [DEBUG] (MainThread) Returned: onn_node1-g8-h4/pool00
2018-06-29 14:19:31,381 [DEBUG] (MainThread) Pool: <LV 'onn_node1-g8-h4/pool00' />
2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling binary: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {}
2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Exception! Cannot create new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached threshold.
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,440 [DEBUG] (MainThread) Returned:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__main__.py", line 53, in <module>
    CliApplication()
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__init__.py", line 82, in CliApplication
    app.hooks.emit("post-arg-parse", args)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/hooks.py", line 120, in emit
    cb(self.context, *args)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 56, in post_argparse
    base_lv, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 118, in extract
    "%s" % size, nvr)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 84, in add_base_with_tree
    lvs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 310, in add_base
    new_base_lv = pool.create_thinvol(new_base.lv_name, size)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/lvm.py", line 324, in create_thinvol
    self.lvm_name])
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 390, in lvcreate
    return self.call(["lvcreate"] + args, **kwargs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 378, in call
    stdout = call(*args, **kwargs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 153, in call
    return subprocess.check_output(*args, **kwargs).strip()
  File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00']' returned non-zero exit status 5
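Since the traceback bottoms out in a plain lvcreate call, the failure can be reproduced (and any fix verified) outside imgbased entirely. A minimal sketch, with test-base as a hypothetical throwaway name:

# Re-run the exact lvcreate that imgbased attempted; expect the same "reached threshold" error
lvcreate --thin --virtualsize 53750005760B --name test-base onn_node1-g8-h4/pool00

# If it succeeds after a fix, drop the scratch volume again
lvremove onn_node1-g8-h4/test-base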
On 07/02/2018 04:58 AM, Yuval Turgeman wrote:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?
Thanks, Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Yuval, can you please have a look?
2018-06-30 7:48 GMT+02:00 Oliver Riesener <Oliver.Riesener@hs-bremen.de>:
Yes, it's the same here.
It seems the bootloader isn't configured right?
I did the upgrade to 4.2.4 from the UI, rebooted, and got:
[root@ovn-monster ~]# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0:
    ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0:
    ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
    ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
    ovirt-node-ng-4.2.1.1-0.20180223.0+1:
      index: 1
      title: ovirt-node-ng-4.2.1.1-0.20180223.0
      kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
      initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1

[root@ovn-monster ~]# uptime
 07:35:27 up 2 days, 15:42, 1 user, load average: 1,07, 1,00, 0,95
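One way to cross-check what nodectl reports against what grub will actually boot (a sketch assuming a BIOS grub2 layout; UEFI paths differ):

# The entry grub2 currently considers the default
grub2-editenv list

# The default kernel according to grubby
grubby --default-kernel

# The bases and layers imgbased itself knows about
imgbase layout

If the 4.2.4 layer exists but the bootloader default still points at ovirt-node-ng-4.2.3-0.20180524.0+1, as above, the image was installed but the boot entry was never switched.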
On 29.06.2018 at 23:53, Matt Simonsen <matt@khoza.com> wrote:
Hello,
I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node platform and it doesn't appear the updates worked.

2018-07-02 19:55 GMT+02:00 Yuval Turgeman <yturgema@redhat.com>:
Not in front of my laptop so it's a little hard to read but does it say 8g free on the vg ?
Yes, it says 8G in the VFree column.

Yes, it shows 8G free on the VG.

I removed the LV for /var/crash, then installed again, and it is still failing on the same step:

2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {'close_fds': True, 'stderr': -2}
2018-07-02 12:21:10,069 [DEBUG] (MainThread) Exception! Cannot create new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached threshold.
2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.ZYOjC'],) {}

Thanks
Matt

On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
Not in front of my laptop so it's a little hard to read, but does it say 8G free on the VG?
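A closing note on why removing var_crash didn't move the needle: thin LVs consume pool space only for blocks actually written, so deleting a likely near-empty 10G volume barely changes the pool's Data% (dominated here by var_local_images at ~90%). The "reached threshold" refusal from lvcreate is, as far as I can tell, driven by LVM's thin-pool autoextend threshold, so the value worth inspecting lives in lvm.conf (a sketch; check what the node actually ships before changing anything):

# Configured threshold; 100 disables the check, lower values make lvcreate
# refuse new thin LVs once pool usage exceeds them
lvmconfig activation/thin_pool_autoextend_threshold

# Current pool usage to compare against that threshold
lvs -o lv_name,data_percent,metadata_percent onn_node1-g8-h4/pool00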
On Mon, Jul 2, 2018, 20:00 Matt Simonsen <matt@khoza.com <mailto:matt@khoza.com>> wrote:
This error adds some clarity.
That said, I'm a bit unsure how the space can be the issue given I have several hundred GB of storage in the thin pool that's unused...
How do you suggest I proceed?
Thank you for your help,
Matt
[root@node6-g8-h4 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert home onn_node1-g8-h4 Vwi-aotz-- 1.00g pool00 4.79 ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root ovirt-node-ng-4.2.2-0.20180423.0+1 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0 ovirt-node-ng-4.2.3.1-0.20180530.0 onn_node1-g8-h4 Vri---tz-k <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0 6.95 pool00 onn_node1-g8-h4 twi-aotz-- <1.30t 76.63 50.34 root onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 tmp onn_node1-g8-h4 Vwi-aotz-- 1.00g pool00 5.04 var onn_node1-g8-h4 Vwi-aotz-- 15.00g pool00 5.86 var_crash onn_node1-g8-h4 Vwi---tz-- 10.00g pool00 var_local_images onn_node1-g8-h4 Vwi-aotz-- 1.10t pool00 89.72 var_log onn_node1-g8-h4 Vwi-aotz-- 8.00g pool00 6.84 var_log_audit onn_node1-g8-h4 Vwi-aotz-- 2.00g pool00 6.16 [root@node6-g8-h4 ~]# vgs VG #PV #LV #SN Attr VSize VFree onn_node1-g8-h4 1 13 0 wz--n- <1.31t 8.00g
2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments: Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update', debug=True, experimental=False, format='liveimg', stream='Image') 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img' 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {} 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {} 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at '/tmp/mnt.1OhaU/LiveOS/rootfs.img' 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {} 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {} 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr: ovirt-node-ng-4.2.4-0.20180626.0 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/' 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {} 2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned: /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1' 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,283 [DEBUG] (MainThread) Returned: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1 2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV for path /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1 
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV 'ovirt-node-ng-4.2.3.1-0.20180530.0+1' for path '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1' 2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,321 [DEBUG] (MainThread) Returned: onn_node1-g8-h4 2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,355 [DEBUG] (MainThread) Returned: 53750005760B 2018-06-29 14:19:31,355 [DEBUG] (MainThread) Recommeneded base size: 53750005760B 2018-06-29 14:19:31,355 [INFO] (MainThread) Starting base creation 2018-06-29 14:19:31,355 [INFO] (MainThread) New base will be: ovirt-node-ng-4.2.4-0.20180626.0 2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,381 [DEBUG] (MainThread) Returned: onn_node1-g8-h4/pool00 2018-06-29 14:19:31,381 [DEBUG] (MainThread) Pool: <LV 'onn_node1-g8-h4/pool00' /> 2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling binary: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {} 2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,406 [DEBUG] (MainThread) Exception! Cannot create new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached threshold.
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.153do'],) {} 2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.153do'],) {} 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {} 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.1OhaU'],) {} 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,440 [DEBUG] (MainThread) Returned: Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__main__.py", line 53, in <module> CliApplication() File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__init__.py", line 82, in CliApplication app.hooks.emit("post-arg-parse", args) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/hooks.py", line 120, in emit cb(self.context, *args) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 56, in post_argparse base_lv, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 118, in extract "%s" % size, nvr) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 84, in add_base_with_tree lvs) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 310, in add_base new_base_lv = pool.create_thinvol(new_base.lv_name, size) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/lvm.py", line 324, in create_thinvol self.lvm_name]) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 390, in lvcreate return self.call(["lvcreate"] + args, **kwargs) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 378, in call stdout = call(*args, **kwargs) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 153, in call return subprocess.check_output(*args, **kwargs).strip() File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output raise CalledProcessError(retcode, cmd, output=output) subprocess.CalledProcessError: Command '['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00']' returned non-zero exit status 5
On 07/02/2018 04:58 AM, Yuval Turgeman wrote:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log ?
Thanks, Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo@redhat.com <mailto:sbonazzo@redhat.com>> wrote:
Yuval, can you please have a look?
2018-06-30 7:48 GMT+02:00 Oliver Riesener <Oliver.Riesener@hs-bremen.de <mailto:Oliver.Riesener@hs-bremen.de>>:
Yes, here is the same.
It seams the bootloader isn’t configured right ?
I did the Upgrade and reboot to 4.2.4 from UI and got:
[root@ovn-monster ~]# nodectl info layers: ovirt-node-ng-4.2.4-0.20180626.0: ovirt-node-ng-4.2.4-0.20180626.0+1 ovirt-node-ng-4.2.3.1-0.20180530.0: ovirt-node-ng-4.2.3.1-0.20180530.0+1 ovirt-node-ng-4.2.3-0.20180524.0: ovirt-node-ng-4.2.3-0.20180524.0+1 ovirt-node-ng-4.2.1.1-0.20180223.0: ovirt-node-ng-4.2.1.1-0.20180223.0+1 bootloader: default: ovirt-node-ng-4.2.3-0.20180524.0+1 entries: ovirt-node-ng-4.2.3-0.20180524.0+1: index: 0 title: ovirt-node-ng-4.2.3-0.20180524.0 kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64 args: "ro crashkernel=auto rd.lvm.lv <http://rd.lvm.lv>=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv <http://rd.lvm.lv>=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1" initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 ovirt-node-ng-4.2.1.1-0.20180223.0+1: index: 1 title: ovirt-node-ng-4.2.1.1-0.20180223.0 kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64 args: "ro crashkernel=auto rd.lvm.lv <http://rd.lvm.lv>=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv <http://rd.lvm.lv>=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1" initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1 [root@ovn-monster ~]# uptime 07:35:27 up 2 days, 15:42, 1 user, load average: 1,07, 1,00, 0,95
Am 29.06.2018 um 23:53 schrieb Matt Simonsen <matt@khoza.com <mailto:matt@khoza.com>>:
[...]
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com

Are you mounted with discard? Perhaps fstrim?

On Mon, Jul 2, 2018 at 10:23 PM, Matt Simonsen <matt@khoza.com> wrote:
Yes, it shows 8g on the VG
I removed the LV for /var/crash, then ran the install again, and it is still failing at this step:
2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {'close_fds': True, 'stderr': -2}
2018-07-02 12:21:10,069 [DEBUG] (MainThread) Exception! Cannot create new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached threshold.
2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.ZYOjC'],) {}
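For what it's worth, a quick check to see whether it is the pool's data space or its metadata space that trips the threshold (a sketch; the pool name onn_node1-g8-h4/pool00 is taken from the log above):

# Compare Data% and Meta% for the pool
lvs -o lv_name,lv_size,data_percent,metadata_percent onn_node1-g8-h4/pool00

# The threshold LVM enforces before refusing new thin volumes
grep thin_pool_autoextend_threshold /etc/lvm/lvm.conf

I'm assuming the "reached threshold" message is driven by activation/thin_pool_autoextend_threshold in lvm.conf (oVirt Node may ship its own value); if either percentage is at or above that number, lvcreate refuses to carve out new thin volumes even though the pool is not literally full.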
Thanks
Matt
On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
Not in front of my laptop, so it's a little hard to read, but does it say 8g free on the vg?
On Mon, Jul 2, 2018, 20:00 Matt Simonsen <matt@khoza.com> wrote:
This error adds some clarity.
That said, I'm a bit unsure how the space can be the issue given I have several hundred GB of storage in the thin pool that's unused...
How do you suggest I proceed?
Thank you for your help,
Matt
[root@node6-g8-h4 ~]# lvs
  LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
  ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
  ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
  ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
  ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
  pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                           76.63  50.34
  root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
  tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     5.04
  var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                     5.86
  var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
  var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                    89.72
  var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                     6.84
  var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                     6.16

[root@node6-g8-h4 ~]# vgs
  VG              #PV #LV #SN Attr   VSize  VFree
  onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments: Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update', debug=True, experimental=False, format='liveimg', stream='Image')
2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {}
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {}
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr: ovirt-node-ng-4.2.4-0.20180626.0
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/'
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {}
2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned: /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Returned: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV for path /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV 'ovirt-node-ng-4.2.3.1-0.20180530.0+1' for path '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,321 [DEBUG] (MainThread) Returned: onn_node1-g8-h4
2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,355 [DEBUG] (MainThread) Returned: 53750005760B
2018-06-29 14:19:31,355 [DEBUG] (MainThread) Recommeneded base size: 53750005760B
2018-06-29 14:19:31,355 [INFO] (MainThread) Starting base creation
2018-06-29 14:19:31,355 [INFO] (MainThread) New base will be: ovirt-node-ng-4.2.4-0.20180626.0
2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,381 [DEBUG] (MainThread) Returned: onn_node1-g8-h4/pool00
2018-06-29 14:19:31,381 [DEBUG] (MainThread) Pool: <LV 'onn_node1-g8-h4/pool00' />
2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling binary: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {}
2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Exception! Cannot create new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached threshold.
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,440 [DEBUG] (MainThread) Returned:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__main__.py", line 53, in <module>
    CliApplication()
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__init__.py", line 82, in CliApplication
    app.hooks.emit("post-arg-parse", args)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/hooks.py", line 120, in emit
    cb(self.context, *args)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 56, in post_argparse
    base_lv, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 118, in extract
    "%s" % size, nvr)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 84, in add_base_with_tree
    lvs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 310, in add_base
    new_base_lv = pool.create_thinvol(new_base.lv_name, size)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/lvm.py", line 324, in create_thinvol
    self.lvm_name])
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 390, in lvcreate
    return self.call(["lvcreate"] + args, **kwargs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 378, in call
    stdout = call(*args, **kwargs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 153, in call
    return subprocess.check_output(*args, **kwargs).strip()
  File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00']' returned non-zero exit status 5
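If I'm reading the lvs output above correctly, raw data space is not what ran out:

    pool size          <1.30t, Data% = 76.63
    free data space    1.30 TiB x (1 - 0.7663) ≈ 0.30 TiB ≈ 310 GiB
    requested volume   53750005760B ≈ 50 GiB (virtual size)

So the failure looks like LVM's threshold policy on the pool (or its metadata) rather than the pool literally being full.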
On 07/02/2018 04:58 AM, Yuval Turgeman wrote:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?
Thanks, Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Yuval, can you please have a look?
2018-06-30 7:48 GMT+02:00 Oliver Riesener <Oliver.Riesener@hs-bremen.de>:
Yes, here it is the same.
It seems the bootloader isn't configured right?
I did the upgrade and reboot to 4.2.4 from the UI and got:
[...]

On 07/02/2018 12:55 PM, Yuval Turgeman wrote:
Are you mounted with discard? Perhaps fstrim?
I believe that I have all the default options, and I have one extra partition for images.

#
# /etc/fstab
# Created by anaconda on Sat Oct 31 18:04:29 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1 / ext4 defaults,discard 1 1
UUID=84ca8776-61d6-4b19-9104-99730932b45a /boot ext4 defaults 1 2
/dev/mapper/onn_node1--g8--h4-home /home ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-tmp /tmp ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var /var ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var_local_images /var/local/images ext4 defaults 1 2
/dev/mapper/onn_node1--g8--h4-var_log /var/log ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var_log_audit /var/log/audit ext4 defaults,discard 1 2

At this point I don't have a /var/crash mounted (or an LV, even). I assume I should re-create it.

I noticed on another server with the same problem that the var_crash LV isn't available. Could this be part of the problem?

  --- Logical volume ---
  LV Path                /dev/onn/var_crash
  LV Name                var_crash
  VG Name                onn
  LV UUID                X1TPMZ-XeZP-DGYv-woZW-3kvk-vWZu-XQcFhL
  LV Write Access        read/write
  LV Creation host, time node1-g7-h1.srihosting.com, 2018-04-05 07:03:35 -0700
  LV Pool name           pool00
  LV Status              NOT available
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

Thanks
Matt
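For the inactive LV, something along these lines should bring it back and hand unused blocks back to the thin pool (a sketch; the VG/LV names come from the lvdisplay output above, and the mount step assumes its fstab entry is restored first):

# Activate the thin LV that lvdisplay reports as "NOT available"
lvchange -ay onn/var_crash

# Mount it again once /etc/fstab has an entry for it
mount /var/crash

# Trim all mounted filesystems so freed blocks return to the pool
fstrim -av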

Btw, removing /var/crash was directed to Oliver - you have different problems.

On Mon, Jul 2, 2018 at 10:23 PM, Matt Simonsen <matt@khoza.com> wrote:
[...]

On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen <matt@khoza.com> wrote:
[...]

pool00 onn_node1-g8-h4 twi-aotz-- <1.30t 76.63 50.34
I think your thin pool's metadata volume is close to full and needs to be enlarged. This quite likely happened because you extended the thin pool without extending the metadata volume. Check also 'lvs -a'.

This might be enough, but check the names first:

lvextend -L+200m onn_node1-g8-h4/pool00_tmeta

Best regards,
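Spelled out, the suggested check-then-extend sequence would look roughly like this (a sketch; the VG name is the one from this thread, and [pool00_tmeta] is the hidden metadata LV that 'lvs -a' normally lists next to the pool):

# Show hidden LVs; check the pool00 row's Meta% and the [pool00_tmeta] size
lvs -a -o lv_name,lv_size,data_percent,metadata_percent onn_node1-g8-h4

# Grow the metadata LV by 200 MiB out of the VG's 8.00g of free space
lvextend -L+200m onn_node1-g8-h4/pool00_tmeta

# Confirm the new size, then retry the upgrade
lvs -a onn_node1-g8-h4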
root onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 tmp onn_node1-g8-h4 Vwi-aotz-- 1.00g pool00 5.04 var onn_node1-g8-h4 Vwi-aotz-- 15.00g pool00 5.86 var_crash onn_node1-g8-h4 Vwi---tz-- 10.00g pool00 var_local_images onn_node1-g8-h4 Vwi-aotz-- 1.10t pool00 89.72 var_log onn_node1-g8-h4 Vwi-aotz-- 8.00g pool00 6.84 var_log_audit onn_node1-g8-h4 Vwi-aotz-- 2.00g pool00 6.16 [root@node6-g8-h4 ~]# vgs VG #PV #LV #SN Attr VSize VFree onn_node1-g8-h4 1 13 0 wz--n- <1.31t 8.00g
2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments: Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update', debug=True, experimental=False, format='liveimg', stream='Image') 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img' 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {} 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {} 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at '/tmp/mnt.1OhaU/LiveOS/rootfs.img' 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {} 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {} 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr: ovirt-node-ng-4.2.4-0.20180626.0 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/' 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {} 2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned: /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1' 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,283 [DEBUG] (MainThread) Returned: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1 2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV for path /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1 
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV 'ovirt-node-ng-4.2.3.1-0.20180530.0+1' for path '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1' 2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,321 [DEBUG] (MainThread) Returned: onn_node1-g8-h4 2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,355 [DEBUG] (MainThread) Returned: 53750005760B 2018-06-29 14:19:31,355 [DEBUG] (MainThread) Recommeneded base size: 53750005760B 2018-06-29 14:19:31,355 [INFO] (MainThread) Starting base creation 2018-06-29 14:19:31,355 [INFO] (MainThread) New base will be: ovirt-node-ng-4.2.4-0.20180626.0 2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,381 [DEBUG] (MainThread) Returned: onn_node1-g8-h4/pool00 2018-06-29 14:19:31,381 [DEBUG] (MainThread) Pool: <LV 'onn_node1-g8-h4/pool00' /> 2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling binary: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {} 2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,406 [DEBUG] (MainThread) Exception! Cannot create new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached threshold.
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.153do'],) {} 2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.153do'],) {} 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {} 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.1OhaU'],) {} 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,440 [DEBUG] (MainThread) Returned: Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__main__.py", line 53, in <module> CliApplication() File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__init__.py", line 82, in CliApplication app.hooks.emit("post-arg-parse", args) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/hooks.py", line 120, in emit cb(self.context, *args) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 56, in post_argparse base_lv, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 118, in extract "%s" % size, nvr) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 84, in add_base_with_tree lvs) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 310, in add_base new_base_lv = pool.create_thinvol(new_base.lv_name, size) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/lvm.py", line 324, in create_thinvol self.lvm_name]) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 390, in lvcreate return self.call(["lvcreate"] + args, **kwargs) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 378, in call stdout = call(*args, **kwargs) File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 153, in call return subprocess.check_output(*args, **kwargs).strip() File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output raise CalledProcessError(retcode, cmd, output=output) subprocess.CalledProcessError: Command '['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00']' returned non-zero exit status 5
On 07/02/2018 04:58 AM, Yuval Turgeman wrote:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?
Thanks, Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Yuval, can you please have a look?
2018-06-30 7:48 GMT+02:00 Oliver Riesener <Oliver.Riesener@hs-bremen.de>:
Yes, I am seeing the same here.
It seems the bootloader isn't configured correctly?
I did the upgrade to 4.2.4 from the UI, rebooted, and got:
[root@ovn-monster ~]# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0:
    ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0:
    ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
    ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
    ovirt-node-ng-4.2.1.1-0.20180223.0+1:
      index: 1
      title: ovirt-node-ng-4.2.1.1-0.20180223.0
      kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
      initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1

[root@ovn-monster ~]# uptime
 07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
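The nodectl output above shows the 4.2.4 layer was created but the bootloader default still points at 4.2.3, which matches the "update not applied" symptom. A hedged sketch for inspecting and, if necessary, repointing the default entry with grubby; the 4.2.4 kernel path is a placeholder (it is not shown in the output above), so copy the exact path from nodectl info on your own host first:

  # Show the configured boot entries and the current default kernel
  grubby --info=ALL
  grubby --default-kernel

  # Repoint the default at the 4.2.4 layer's kernel (path is illustrative)
  grubby --set-default=/boot/ovirt-node-ng-4.2.4-0.20180626.0+1/vmlinuz-<kernel-version>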
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com
-- Didi

Not sure this is the problem; autoextend should be enabled for the thinpool, and `lvs -o +profile` should show imgbased-pool (defined at /etc/lvm/profile/imgbased-pool.profile).
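A quick sketch of Yuval's check, assuming the profile name and path he gives; the profile column should name imgbased-pool for pool00, and lvmconfig can print the autoextend settings that profile is expected to define:

  # The profile attached to the pool (empty output means no profile is set)
  lvs -o lv_name,profile onn_node1-g8-h4/pool00

  # Autoextend settings as seen through the imgbased-pool profile
  lvmconfig --profile imgbased-pool activation/thin_pool_autoextend_threshold
  lvmconfig --profile imgbased-pool activation/thin_pool_autoextend_percent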
On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen <matt@khoza.com> wrote:
This error adds some clarity.
That said, I'm a bit unsure how the space can be the issue given I have
several hundred GB of storage in the thin pool that's unused...
How do you suggest I proceed?
Thank you for your help,
Matt
[root@node6-g8-h4 ~]# lvs
LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                           76.63  50.34
I think your thinpool meta volume is close to full and needs to be enlarged. This quite likely happened because you extended the thinpool without extending the meta vol.
Check also 'lvs -a'.
This might be enough, but check the names first:
lvextend -L+200m onn_node1-g8-h4/pool00_tmeta
Best regards,
root             onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
tmp              onn_node1-g8-h4 Vwi-aotz--   1.00g pool00  5.04
var              onn_node1-g8-h4 Vwi-aotz--  15.00g pool00  5.86
var_crash        onn_node1-g8-h4 Vwi---tz--  10.00g pool00
var_local_images onn_node1-g8-h4 Vwi-aotz--   1.10t pool00 89.72
var_log          onn_node1-g8-h4 Vwi-aotz--   8.00g pool00  6.84
var_log_audit    onn_node1-g8-h4 Vwi-aotz--   2.00g pool00  6.16

[root@node6-g8-h4 ~]# vgs
  VG              #PV #LV #SN Attr   VSize  VFree
  onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
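Didi's suggestion as a runnable sketch, under the assumption that the hidden metadata LV really is named pool00_tmeta; the first command confirms the name before anything is changed:

  # Hidden tmeta/tdata volumes only appear with -a
  lvs -a -o lv_name,lv_size,data_percent,metadata_percent onn_node1-g8-h4

  # Grow the pool's metadata volume by 200 MiB, as suggested above
  lvextend -L +200m onn_node1-g8-h4/pool00_tmeta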
Thank you again for the assistance with this issue. The result of the command is in the lvs output quoted earlier in the thread.

In the future I am considering using separate logical RAID volumes, so that the oVirt Node image and the storage filesystem sit on different devices (sda, sdb, etc.) to simplify things. However, I'd like to understand why this upgrade failed, and also how to correct it if at all possible.

I believe I need to recreate the /var/crash partition? I removed it incorrectly; is it simply a matter of using LVM to add a new partition and formatting it?

Secondly, do you have any suggestions on how to move forward with the error regarding the pool capacity? I'm not sure whether this is a legitimate error or a problem in the upgrade process.

Thanks,
Matt
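On the /var/crash question, a hedged sketch rather than a verified oVirt Node recipe: the lvs output earlier in the thread still lists a 10G var_crash LV (inactive), so it may only need activating and mounting; recreating it is only necessary if the LV is actually gone. The ext4 choice below is an assumption, so check what the node's other filesystems use first:

  # If the LV still exists but is inactive, as the earlier lvs output suggests:
  lvchange -ay onn_node1-g8-h4/var_crash
  mount /dev/onn_node1-g8-h4/var_crash /var/crash

  # Only if the LV is really gone: recreate it as a 10G thin volume and format it
  lvcreate --thin --virtualsize 10G --name var_crash onn_node1-g8-h4/pool00
  mkfs.ext4 /dev/onn_node1-g8-h4/var_crash   # filesystem type is an assumption
  # ...and add a matching /etc/fstab entry so it mounts at boot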

Hi Matt,

I would try to run `fstrim -a` (see man fstrim) and check whether it frees anything in the thinpool. If you do decide to run it, please send the lvs output again afterwards. Also, are you on #ovirt?

Thanks,
Yuval.
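A before/after sketch of that suggestion; fstrim only reclaims space for mounted filesystems whose discards actually reach the thin pool, so comparing the two lvs outputs shows whether it helped:

  # Pool usage before trimming
  lvs -o lv_name,data_percent,metadata_percent onn_node1-g8-h4/pool00

  # Trim all mounted filesystems that support it; -v reports bytes trimmed
  fstrim -a -v

  # Pool usage after, for comparison
  lvs -o lv_name,data_percent,metadata_percent onn_node1-g8-h4/pool00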
Thank you again for the assistance with this issue.
Below is the result of the command below.
In the future I am considering using different Logical RAID Volumes to get different devices (sda, sdb, etc) for the oVirt Node image & storage filesystem to simplify. However I'd like to understand why this upgrade failed and also how to correct it if at all possible.
I believe I need to recreate the /var/crash partition? I incorrectly removed it, is it simply a matter of using LVM to add a new partition and format it?
Secondly, do you have any suggestions on how to move forward with the error regarding the pool capacity? I'm not sure if this is a legitimate error or problem in the upgrade process.
Thanks,
Matt
On 07/03/2018 03:58 AM, Yuval Turgeman wrote:
Not sure this is the problem, autoextend should be enabled for the thinpool, `lvs -o +profile` should show imgbased-pool (defined at /etc/lvm/profile/imgbased-pool.profile)
On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen <matt@khoza.com> wrote:
This error adds some clarity.
That said, I'm a bit unsure how the space can be the issue given I have
several hundred GB of storage in the thin pool that's unused...
How do you suggest I proceed?
Thank you for your help,
Matt
[root@node6-g8-h4 ~]# lvs
LV VG Attr
LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home onn_node1-g8-h4 Vwi-aotz-- 1.00g pool00 4.79 ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root ovirt-node-ng-4.2.2-0.20180423.0+1 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0 ovirt-node-ng-4.2.3.1-0.20180530.0 onn_node1-g8-h4 Vri---tz-k <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0 6.95 pool00 onn_node1-g8-h4 twi-aotz-- <1.30t 76.63 50.34
I think your thinpool meta volume is close to full and needs to be enlarged. This quite likely happened because you extended the thinpool without extending the meta vol.
Check also 'lvs -a'.
This might be enough, but check the names first:
lvextend -L+200m onn_node1-g8-h4/pool00_tmeta
Best regards,
root onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 tmp onn_node1-g8-h4 Vwi-aotz-- 1.00g pool00 5.04 var onn_node1-g8-h4 Vwi-aotz-- 15.00g pool00 5.86 var_crash onn_node1-g8-h4 Vwi---tz-- 10.00g pool00 var_local_images onn_node1-g8-h4 Vwi-aotz-- 1.10t pool00 89.72 var_log onn_node1-g8-h4 Vwi-aotz-- 8.00g pool00 6.84 var_log_audit onn_node1-g8-h4 Vwi-aotz-- 2.00g pool00 6.16 [root@node6-g8-h4 ~]# vgs VG #PV #LV #SN Attr VSize VFree onn_node1-g8-h4 1 13 0 wz--n- <1.31t 8.00g
2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments: Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt- node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update', debug=True, experimental=False, format='liveimg', stream='Image') 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180 626.0.el7.squashfs.img' 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {} 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {} 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount', '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at '/tmp/mnt.1OhaU/LiveOS/rootfs.img' 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {} 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {} 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned: 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr: ovirt-node-ng-4.2.4-0.20180626.0 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/' 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {} 2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2} 2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned: /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1' 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>} 2018-06-29 14:19:31,283 [DEBUG] (MainThread) Returned: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1 2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV for path /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1: onn_node1-g8-h4 ovirt-node-ng-4.2.3.1-0.20180530.0+1 
2018-06-29 14:19:31,283 [DEBUG] (MainThread) Found LV 'ovirt-node-ng-4.2.3.1-0.20180530.0+1' for path '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,284 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,321 [DEBUG] (MainThread) Returned: onn_node1-g8-h4
2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,322 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-osize', '--units', 'B', u'onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,355 [DEBUG] (MainThread) Returned: 53750005760B
2018-06-29 14:19:31,355 [DEBUG] (MainThread) Recommeneded base size: 53750005760B
2018-06-29 14:19:31,355 [INFO] (MainThread) Starting base creation
2018-06-29 14:19:31,355 [INFO] (MainThread) New base will be: ovirt-node-ng-4.2.4-0.20180626.0
2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,356 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7f56b787eed0>}
2018-06-29 14:19:31,381 [DEBUG] (MainThread) Returned: onn_node1-g8-h4/pool00
2018-06-29 14:19:31,381 [DEBUG] (MainThread) Pool: <LV 'onn_node1-g8-h4/pool00' />
2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling binary: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {}
2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling: (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Exception! Cannot create new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached threshold.
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling binary: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling: (['umount', '-l', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling binary: (['rmdir', u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling: (['rmdir', u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,440 [DEBUG] (MainThread) Returned:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__main__.py", line 53, in <module>
    CliApplication()
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__init__.py", line 82, in CliApplication
    app.hooks.emit("post-arg-parse", args)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/hooks.py", line 120, in emit
    cb(self.context, *args)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 56, in post_argparse
    base_lv, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 118, in extract
    "%s" % size, nvr)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py", line 84, in add_base_with_tree
    lvs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 310, in add_base
    new_base_lv = pool.create_thinvol(new_base.lv_name, size)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/lvm.py", line 324, in create_thinvol
    self.lvm_name])
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 390, in lvcreate
    return self.call(["lvcreate"] + args, **kwargs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 378, in call
    stdout = call(*args, **kwargs)
  File "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py", line 153, in call
    return subprocess.check_output(*args, **kwargs).strip()
  File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name', 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00']' returned non-zero exit status 5
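The failure above is plain LVM refusing to create a new thin volume. A minimal sketch for checking how full the pool and its metadata volume are, using the VG/LV names from the log (the hidden _tdata/_tmeta volumes only show up with -a):

  lvs -a -o lv_name,lv_attr,lv_size,data_percent,metadata_percent onn_node1-g8-h4

  # the exact call imgbased makes; once the pool has crossed its configured
  # threshold, it fails with exit status 5, as in the traceback above
  lvcreate --thin --virtualsize 53750005760B --name ovirt-node-ng-4.2.4-0.20180626.0 onn_node1-g8-h4/pool00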
On 07/02/2018 04:58 AM, Yuval Turgeman wrote:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?
Thanks, Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Yuval, can you please have a look?
2018-06-30 7:48 GMT+02:00 Oliver Riesener <Oliver.Riesener@hs-bremen.de>:
Yes, it's the same here.
It seems the bootloader isn't configured right?
I did the upgrade and reboot to 4.2.4 from the UI and got:
[root@ovn-monster ~]# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0:
    ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0:
    ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
    ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
    ovirt-node-ng-4.2.1.1-0.20180223.0+1:
      index: 1
      title: ovirt-node-ng-4.2.1.1-0.20180223.0
      kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
      initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1

[root@ovn-monster ~]# uptime
 07:35:27 up 2 days, 15:42, 1 user, load average: 1,07, 1,00, 0,95
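The symptom above - the 4.2.4 layer exists, but the bootloader default still points at the 4.2.3 layer - can be cross-checked directly from the shell. A sketch, assuming the stock EL7 GRUB tooling that ships on oVirt Node:

  nodectl info             # layers and bootloader entries, as shown above
  grubby --default-kernel  # the kernel GRUB will boot by default
  grub2-editenv list       # the saved default entry in grubenv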
-- Didi

Many thanks to Yuval. After moving the discussion to #ovirt, I tried "fstrim -a" and this allowed the upgrade to complete successfully.

Matt

On 07/03/2018 12:19 PM, Yuval Turgeman wrote:
Hi Matt,
I would try to run `fstrim -a` (man fstrim) and see if it frees anything from the thinpool. If you do decide to run this, please send the output for lvs again.
Also, are you on #ovirt?
Thanks, Yuval.
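For context on why this works: on a thin pool, blocks freed inside the filesystems stay allocated until they are trimmed, so the discards issued by fstrim hand the unused space back to the pool and can bring it back under the threshold. A minimal sketch of the check/trim cycle suggested here, reusing the names from this thread:

  lvs -o lv_name,data_percent,metadata_percent onn_node1-g8-h4  # before
  fstrim -av    # trim all mounted filesystems that support discard; -v prints what was trimmed
  lvs -o lv_name,data_percent,metadata_percent onn_node1-g8-h4  # after - Data% of pool00 should drop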
On Tue, Jul 3, 2018 at 9:00 PM, Matt Simonsen <matt@khoza.com> wrote:
Thank you again for the assistance with this issue.
Below is the result of the command.
In the future I am considering using separate logical RAID volumes to present different devices (sda, sdb, etc.) for the oVirt Node image and the storage filesystem, to simplify things. However, I'd like to understand why this upgrade failed, and also how to correct it, if at all possible.
I believe I need to recreate the /var/crash partition? I incorrectly removed it; is it simply a matter of using LVM to add a new partition and formatting it?
Secondly, do you have any suggestions on how to move forward with the error regarding the pool capacity? I'm not sure whether this is a legitimate error or a problem in the upgrade process.
Thanks,
Matt
On 07/03/2018 03:58 AM, Yuval Turgeman wrote:
Not sure this is the problem - autoextend should be enabled for the thinpool; `lvs -o +profile` should show imgbased-pool (defined at /etc/lvm/profile/imgbased-pool.profile).
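A sketch of that check, run on the node (the threshold values in the comment are only what a stock profile is expected to contain - check the actual file):

  lvs -o +profile onn_node1-g8-h4   # pool00 should list imgbased-pool
  cat /etc/lvm/profile/imgbased-pool.profile
  # expected to be along the lines of:
  # activation {
  #     thin_pool_autoextend_threshold = 80
  #     thin_pool_autoextend_percent = 20
  # }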
On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen <matt@khoza.com> wrote:
>
> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue given I have
> several hundred GB of storage in the thin pool that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
> Matt
>
> [root@node6-g8-h4 ~]# lvs
>   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
>   ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
>   pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                           76.63  50.34
I think your thinpool meta volume is close to full and needs to be enlarged. This quite likely happened because you extended the thinpool without extending the meta vol.
Check also 'lvs -a'.
This might be enough, but check the names first:
lvextend -L+200m onn_node1-g8-h4/pool00_tmeta
Best regards,
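A sketch of that sequence with the names from this thread - the Meta% of 50.34 in the lvs output above is what Didi is referring to, and vgs shows 8.00g free in the VG to grow into:

  lvs -a -o lv_name,lv_attr,lv_size,metadata_percent onn_node1-g8-h4  # _tmeta is hidden, hence -a
  lvextend -L+200m onn_node1-g8-h4/pool00_tmeta                       # grow the metadata volume by 200 MiB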
-- Didi

Hi,

I attached my /tmp/imgbased.log

Cheers
Oliver
On 02.07.2018 at 13:58, Yuval Turgeman <yuvalt@redhat.com> wrote:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?
Thanks, Yuval.

From your log:

    AssertionError: Path is already a volume: /var/crash

Basically, it means that you already have an LV for /var/crash but it's not mounted for some reason, so either mount it (if the data is good) or remove it and then reinstall the image-update rpm. Before that, check that you don't have any other LVs in that same state - or you can post the output for lvs... Btw, do you have any more imgbased.log files lying around?

You can find more details about this here:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade

On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:
Hi,
I attached my /tmp/imgbased.log

Cheers
Oliver
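A sketch of the two recovery paths described above, assuming the LV really is onn_ovn-monster/var_crash as in Oliver's later output - verify the names with lvs first:

  lvs -o lv_name,lv_attr,lv_size onn_ovn-monster   # look for LVs that exist but are not mounted

  # keep the data: activate and mount it (and add it to /etc/fstab)
  lvchange -ay onn_ovn-monster/var_crash
  mount /dev/onn_ovn-monster/var_crash /var/crash

  # or, if the data is not needed: remove the LV
  lvremove onn_ovn-monster/var_crash

  # then re-run the failed upgrade
  yum reinstall ovirt-node-ng-image-update.noarch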

Hi Yuval,

yes, you are right, there was an unused and deactivated var_crash LV.

* I activated it and mounted it at /var/crash via /etc/fstab.
* /var/crash was empty, and the LV already had an ext4 filesystem.

  var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86

* Now I will try to upgrade again:

  yum reinstall ovirt-node-ng-image-update.noarch

BTW, no more imgbased.log files found.
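For reference, the /etc/fstab entry for such an LV is a single line along these lines (a minimal sketch; the device path is the one LVM exposes for onn_ovn-monster/var_crash, mount options as you prefer):

  /dev/onn_ovn-monster/var_crash  /var/crash  ext4  defaults  0 0

After adding it, "mount -a" mounts the volume without a reboot.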
On 02.07.2018 at 20:57, Yuval Turgeman <yturgema@redhat.com> wrote:
From your log:
AssertionError: Path is already a volume: /var/crash
Basically, it means that you already have an LV for /var/crash, but it's not mounted for some reason. Either mount it (if the data is good) or remove it, and then reinstall the image-update rpm. Before that, check that you don't have any other LVs in that same state, or post the output of lvs. BTW, do you have any more imgbased.log files lying around?
You can find more details about this here:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
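As a rough sketch of those steps, assuming the volume group is onn_ovn-monster (check with vgs) and the stray volume is var_crash:

  # look for LVs that exist but are inactive or unmounted
  lvs -o lv_name,lv_attr,lv_size,pool_lv onn_ovn-monster
  # if the data is good: activate and mount the volume
  lvchange -ay onn_ovn-monster/var_crash
  mount /dev/onn_ovn-monster/var_crash /var/crash
  # if it is not needed: remove it instead
  # lvremove onn_ovn-monster/var_crash
  # then retry the failed upgrade
  yum reinstall ovirt-node-ng-image-update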
On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:

Hi,
I attached my /tmp/imgbased.log.

Cheers
Oliver
On 02.07.2018 at 13:58, Yuval Turgeman <yuvalt@redhat.com> wrote:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?
Thanks, Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:

Yuval, can you please have a look?
2018-06-30 7:48 GMT+02:00 Oliver Riesener <Oliver.Riesener@hs-bremen.de>:

Yes, here is the same.
It seems the bootloader isn't configured right?
I did the upgrade and reboot to 4.2.4 from the UI and got:
[root@ovn-monster ~]# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0:
    ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0:
    ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
    ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
    ovirt-node-ng-4.2.1.1-0.20180223.0+1:
      index: 1
      title: ovirt-node-ng-4.2.1.1-0.20180223.0
      kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
      initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
[root@ovn-monster ~]# uptime
 07:35:27 up 2 days, 15:42, 1 user, load average: 1,07, 1,00, 0,95
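For reference, which entry grub will actually boot by default can be checked with the stock grub2 tools (illustrative commands; the grub.cfg path differs on UEFI machines, and the node image update is supposed to set the default itself):

  grub2-editenv list                                       # shows saved_entry
  awk -F\' '/^menuentry/ {print $2}' /boot/grub2/grub.cfg  # lists entry titles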

Hi Yuval,

* Reinstallation failed, because the LV already exists:

  ovirt-node-ng-4.2.4-0.20180626.0   onn_ovn-monster Vri-a-tz-k <252,38g pool00                                  0,85
  ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85

  See attachment imgbased.reinstall.log

* I removed them and re-ran the reinstall, without luck. I got:

  KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />

  See attachment imgbased.rereinstall.log

Also a new problem with nodectl info:

[root@ovn-monster tmp]# nodectl info
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module>
    CliApplication()
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication
    return cmdmap.command(args)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command
    return self.commands[command](**kwargs)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info
    Info(self.imgbased, self.machine).write()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__
    self._fetch_information()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information
    self._get_layout()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout
    layout = LayoutParser(self.app.imgbase.layout()).parse()
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout
    return self.naming.layout()
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout
    tree = self.tree(lvs)
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree
    bases[img.base.nvr].layers.append(img)
KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
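The layout that nodectl is parsing here is built from the LVs and their LVM tags; imgbased marks volumes as bases and layers with imgbased:* tags (tag names per the imgbased sources, so verify on your version), and the pairs it will find can be listed with:

  lvs -o lv_name,lv_tags --noheadings onn_ovn-monster | grep imgbased

Judging from the last frame of the traceback (bases[img.base.nvr].layers.append(img)), the KeyError means a +1 layer LV exists whose matching base LV is missing.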

Oliver, can you share the output from lvs?

Yuval, here comes the lvs output.

The I/O errors are because the node is in maintenance. The LV "root" is from a previously installed CentOS 7.5; then I installed node-ng 4.2.1 and got this mix. The LV "turbo" is an SSD in its own VG named "ovirt".

I removed the LVs ovirt-node-ng-4.2.1-0.20180223.0 and (+1) because of the nodectl info error KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />. Now I get the error at 4.2.3 (same nodectl traceback as in my previous mail, ending with):

  KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />

[root@ovn-monster ~]# lvs -a
  /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 0: Input/output error
  /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 0: Input/output error
  /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 0: Input/output error
  [... the same "read failed ... Input/output error" warnings repeat for the metadata, ids, leases, outbox, xleases, inbox and master LVs of the inactive storage domains 675cb45d-3746-4f3b-b9ee-516612da50e5, c91974bf-fd64-4630-8005-e785b73acbef and bcdbb66e-6196-4366-be25-a3e9ab948839 ...]
  LV                                   VG              Attr       LSize    Pool   Origin                           Data%  Meta% Move Log Cpy%Sync Convert
  home                                 onn_ovn-monster Vwi-aotz--    1,00g pool00                                   4,79
  [lvol0_pmspare]                      onn_ovn-monster ewi-------  144,00m
  ovirt-node-ng-4.2.3-0.20180524.0+1   onn_ovn-monster Vwi-aotz-- <252,38g pool00                                   2,88
  ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00                                   0,86
  ovirt-node-ng-4.2.4-0.20180626.0     onn_ovn-monster Vri-a-tz-k <252,38g pool00                                   0,85
  ovirt-node-ng-4.2.4-0.20180626.0+1   onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0  0,85
  pool00                               onn_ovn-monster twi-aotz-- <279,38g                                          6,76   1,01
  [pool00_tdata]                       onn_ovn-monster Twi-ao---- <279,38g
  [pool00_tmeta]                       onn_ovn-monster ewi-ao----    1,00g
  root                                 onn_ovn-monster Vwi-a-tz-- <252,38g pool00                                   1,24
  swap                                 onn_ovn-monster -wi-ao----    4,00g
  tmp                                  onn_ovn-monster Vwi-aotz--    1,00g pool00                                   5,01
  var                                  onn_ovn-monster Vwi-aotz--   15,00g pool00                                   3,56
  var_crash                            onn_ovn-monster Vwi-aotz--   10,00g pool00                                   2,86
  var_log                              onn_ovn-monster Vwi-aotz--    8,00g pool00                                  38,48
  var_log_audit                        onn_ovn-monster Vwi-aotz--    2,00g pool00                                   6,77
  turbo                                ovirt           -wi-ao----  894,25g
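Side note: lvs can be pointed at just the node VG to skip the inactive storage-domain devices and keep the output readable (sketch):

  lvs -o lv_name,lv_attr,lv_size,origin,lv_tags onn_ovn-monster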
Am 03.07.2018 um 12:58 schrieb Yuval Turgeman <yturgema@redhat.com>:
Oliver, can you share the output from lvs ?
On Tue, Jul 3, 2018 at 12:06 AM, Oliver Riesener <Oliver.Riesener@hs-bremen.de <mailto:Oliver.Riesener@hs-bremen.de>> wrote: Hi Yuval,
* reinstallation failed, because LV already exists. ovirt-node-ng-4.2.4-0.20180626.0 onn_ovn-monster Vri-a-tz-k <252,38g pool00 0,85 ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85 See attachment imgbased.reinstall.log
* I removed them and re-reinstall without luck.
I got KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />
See attachment imgbased.rereinstall.log
Also a new problem with nodectl info [root@ovn-monster tmp]# nodectl info Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module> CliApplication() File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication return cmdmap.command(args) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command return self.commands[command](**kwargs) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info Info(self.imgbased, self.machine).write() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__ self._fetch_information() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information self._get_layout() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout layout = LayoutParser(self.app.imgbase.layout()).parse() File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout return self.naming.layout() File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout tree = self.tree(lvs) File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree bases[img.base.nvr].layers.append(img) KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
Am 02.07.2018 um 22:22 schrieb Oliver Riesener <Oliver.Riesener@hs-bremen.de <mailto:Oliver.Riesener@hs-bremen.de>>:
Hi Yuval,
yes you are right, there was a unused and deactivated var_crash LV.
* I activated and mount it to /var/crash via /etc/fstab. * /var/crash was empty, and LV has already ext4 fs. var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86
* Now i will try to upgrade again. * yum reinstall ovirt-node-ng-image-update.noarch
BTW, no more imgbased.log files found.
Am 02.07.2018 um 20:57 schrieb Yuval Turgeman <yturgema@redhat.com <mailto:yturgema@redhat.com>>:
From your log:
AssertionError: Path is already a volume: /var/crash
Basically, it means that you already have an LV for /var/crash but it's not mounted for some reason, so either mount it (if the data good) or remove it and then reinstall the image-update rpm. Before that, check that you dont have any other LVs in that same state - or you can post the output for lvs... btw, do you have any more imgbased.log files laying around ?
You can find more details about this here:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/htm... <https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade>
On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de <mailto:Oliver.Riesener@hs-bremen.de>> wrote: Hi,
i attached my /tmp/imgbased.log
Sheers
Oliver
Am 02.07.2018 um 13:58 schrieb Yuval Turgeman <yuvalt@redhat.com <mailto:yuvalt@redhat.com>>:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log ?
Thanks, Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo@redhat.com <mailto:sbonazzo@redhat.com>> wrote: Yuval, can you please have a look?
2018-06-30 7:48 GMT+02:00 Oliver Riesener <Oliver.Riesener@hs-bremen.de <mailto:Oliver.Riesener@hs-bremen.de>>: Yes, here is the same.
It seams the bootloader isn’t configured right ?
I did the Upgrade and reboot to 4.2.4 from UI and got:
[root@ovn-monster ~]# nodectl info layers: ovirt-node-ng-4.2.4-0.20180626.0: ovirt-node-ng-4.2.4-0.20180626.0+1 ovirt-node-ng-4.2.3.1-0.20180530.0: ovirt-node-ng-4.2.3.1-0.20180530.0+1 ovirt-node-ng-4.2.3-0.20180524.0: ovirt-node-ng-4.2.3-0.20180524.0+1 ovirt-node-ng-4.2.1.1-0.20180223.0: ovirt-node-ng-4.2.1.1-0.20180223.0+1 bootloader: default: ovirt-node-ng-4.2.3-0.20180524.0+1 entries: ovirt-node-ng-4.2.3-0.20180524.0+1: index: 0 title: ovirt-node-ng-4.2.3-0.20180524.0 kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64 args: "ro crashkernel=auto rd.lvm.lv <http://rd.lvm.lv/>=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv <http://rd.lvm.lv/>=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1" initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 ovirt-node-ng-4.2.1.1-0.20180223.0+1: index: 1 title: ovirt-node-ng-4.2.1.1-0.20180223.0 kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64 args: "ro crashkernel=auto rd.lvm.lv <http://rd.lvm.lv/>=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv <http://rd.lvm.lv/>=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1" initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1 [root@ovn-monster ~]# uptime 07:35:27 up 2 days, 15:42, 1 user, load average: 1,07, 1,00, 0,95

Hi Oliver,
The KeyError happens because there are no bases for the layers. For each LV that ends with +1, there should be a read-only base LV without the +1, so for 3 ovirt-node-ng images you're supposed to have 6 LVs. This is the reason nodectl info fails, and the upgrade will fail as well. In your original email it looks OK - I have never seen this happen; was this a manual lvremove? I need to reproduce this and check what can be done.
You can find me on #ovirt (irc.oftc.net) too :)
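A quick way to spot layers whose base is missing (a sketch that relies only on the naming convention just described, where every <image>+1 layer must have an <image> base):

    #!/bin/bash
    # For every +1 layer in the node VG, check that the matching base LV exists.
    VG=onn_ovn-monster   # adjust to your VG name

    lvs --noheadings -o lv_name "$VG" | tr -d ' ' | grep '+1$' | while read -r layer; do
        base="${layer%+1}"
        lvs "$VG/$base" >/dev/null 2>&1 || echo "layer $layer has no base $base"
    done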
On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:
Yuval, here comes the lvs output.
The IO errors occur because the node is in maintenance. The LV root is from a previously installed CentOS 7.5; then I installed node-ng 4.2.1 and got this mix. The LV turbo is an SSD in its own VG named ovirt.
I removed the LV ovirt-node-ng-4.2.1-0.20180223.0 (and its +1) because of this nodectl info error:
KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />
Now I get the error at 4.2.3:
[root@ovn-monster ~]# nodectl info
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module>
    CliApplication()
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication
    return cmdmap.command(args)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command
    return self.commands[command](**kwargs)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info
    Info(self.imgbased, self.machine).write()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__
    self._fetch_information()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information
    self._get_layout()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout
    layout = LayoutParser(self.app.imgbase.layout()).parse()
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout
    return self.naming.layout()
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout
    tree = self.tree(lvs)
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree
    bases[img.base.nvr].layers.append(img)
KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
[root@ovn-monster ~]# lvs -a
/dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 5497568559104: Eingabe-/Ausgabefehler
/dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 5497568616448: Eingabe-/Ausgabefehler
/dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 1099526242304: Eingabe-/Ausgabefehler
/dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 1099526299648: Eingabe-/Ausgabefehler
/dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 1099526242304: Eingabe-/Ausgabefehler
/dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 1099526299648: Eingabe-/Ausgabefehler
/dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 536805376: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 536862720: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 2147418112: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 2147475456: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler
/dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 536805376: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 536862720: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 536805376: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 536862720: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 2147418112: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 2147475456: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: read failed after 0 of 4096 at 2147418112: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: read failed after 0 of 4096 at 2147475456: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler
/dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler
/dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home onn_ovn-monster Vwi-aotz-- 1,00g pool00 4,79
[lvol0_pmspare] onn_ovn-monster ewi------- 144,00m
ovirt-node-ng-4.2.3-0.20180524.0+1 onn_ovn-monster Vwi-aotz-- <252,38g pool00 2,88
ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 0,86
ovirt-node-ng-4.2.4-0.20180626.0 onn_ovn-monster Vri-a-tz-k <252,38g pool00 0,85
ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85
pool00 onn_ovn-monster twi-aotz-- <279,38g 6,76 1,01
[pool00_tdata] onn_ovn-monster Twi-ao---- <279,38g
[pool00_tmeta] onn_ovn-monster ewi-ao---- 1,00g
root onn_ovn-monster Vwi-a-tz-- <252,38g pool00 1,24
swap onn_ovn-monster -wi-ao---- 4,00g
tmp onn_ovn-monster Vwi-aotz-- 1,00g pool00 5,01
var onn_ovn-monster Vwi-aotz-- 15,00g pool00 3,56
var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86
var_log onn_ovn-monster Vwi-aotz-- 8,00g pool00 38,48
var_log_audit onn_ovn-monster Vwi-aotz-- 2,00g pool00 6,77
turbo ovirt -wi-ao---- 894,25g
On 03.07.2018 at 12:58, Yuval Turgeman <yturgema@redhat.com> wrote:
Oliver, can you share the output from lvs?
On Tue, Jul 3, 2018 at 12:06 AM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:
Hi Yuval,
* Reinstallation failed, because the LVs already exist:
ovirt-node-ng-4.2.4-0.20180626.0 onn_ovn-monster Vri-a-tz-k <252,38g pool00 0,85
ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85
See attachment imgbased.reinstall.log.
* I removed them and reinstalled once more, without luck.
I got KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />
See attachment imgbased.rereinstall.log
Also, a new problem with nodectl info:
[root@ovn-monster tmp]# nodectl info
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module>
    CliApplication()
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication
    return cmdmap.command(args)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command
    return self.commands[command](**kwargs)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info
    Info(self.imgbased, self.machine).write()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__
    self._fetch_information()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information
    self._get_layout()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout
    layout = LayoutParser(self.app.imgbase.layout()).parse()
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout
    return self.naming.layout()
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout
    tree = self.tree(lvs)
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree
    bases[img.base.nvr].layers.append(img)
KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
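One conceivable repair for a missing base - strictly a sketch, not a documented procedure - is to recreate it as a read-only thin snapshot of its surviving +1 layer, so imgbased's naming tree is complete again. Note the recreated base then holds the current layer's data rather than the pristine image, which may or may not be acceptable for future upgrades:

    # Recreate the read-only base for the 4.2.3 layer; the flags mirror the
    # attributes of a healthy base (read-only, activation skip). Repeat for
    # every layer whose base is gone.
    lvcreate -s onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 \
             -n ovirt-node-ng-4.2.3-0.20180524.0 \
             --permission r --setactivationskip y

    # Verify that nodectl can parse the layout again.
    nodectl info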

Hi Yuval,
yes, it was a manual lvremove.
[root@ovn-monster tmp]# lvm
lvm> lvs onn_ovn-monster
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home onn_ovn-monster Vwi-aotz-- 1,00g pool00 4,79
ovirt-node-ng-4.2.1.1-0.20180223.0 onn_ovn-monster Vwi---tz-k <252,38g pool00 root
ovirt-node-ng-4.2.1.1-0.20180223.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.1.1-0.20180223.0 2,53
ovirt-node-ng-4.2.3-0.20180524.0 onn_ovn-monster Vri---tz-k <252,38g pool00
ovirt-node-ng-4.2.3-0.20180524.0+1 onn_ovn-monster Vwi-aotz-- <252,38g pool00 ovirt-node-ng-4.2.3-0.20180524.0 2,63
ovirt-node-ng-4.2.3.1-0.20180530.0 onn_ovn-monster Vri---tz-k <252,38g pool00
ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0 0,86
ovirt-node-ng-4.2.4-0.20180626.0 onn_ovn-monster Vri---tz-k <252,38g pool00
ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,87
pool00 onn_ovn-monster twi-aotz-- <279,38g 8,19 1,27
root onn_ovn-monster Vwi-a-tz-- <252,38g pool00 1,24
swap onn_ovn-monster -wi-ao---- 4,00g
tmp onn_ovn-monster Vwi-aotz-- 1,00g pool00 5,00
var onn_ovn-monster Vwi-aotz-- 15,00g pool00 3,55
var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86
var_log onn_ovn-monster Vwi-aotz-- 8,00g pool00 38,62
var_log_audit onn_ovn-monster Vwi-aotz-- 2,00g pool00 6,75
lvm> lvremove onn_ovn-monster ovirt-node-ng-4.2.4-0.20180626.0 ovirt-node-ng-4.2.4-0.20180626.0+1
Logical volume onn_ovn-monster/swap in use.
Removing pool "pool00" will remove 15 dependent volume(s). Proceed? [y/n]: n
Logical volume "pool00" not removed.
Logical volume onn_ovn-monster/var_log_audit contains a filesystem in use.
Logical volume onn_ovn-monster/var_log contains a filesystem in use.
Logical volume onn_ovn-monster/var contains a filesystem in use.
Logical volume onn_ovn-monster/tmp contains a filesystem in use.
Logical volume onn_ovn-monster/home contains a filesystem in use.
Do you really want to remove active logical volume onn_ovn-monster/root? [y/n]: n
Logical volume onn_ovn-monster/root not removed.
Logical volume "ovirt-node-ng-4.2.1.1-0.20180223.0" successfully removed
Do you really want to remove active logical volume onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1? [y/n]: n   ###### my mistake here!
Logical volume onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 not removed.
Logical volume onn_ovn-monster/var_crash contains a filesystem in use.
Logical volume "ovirt-node-ng-4.2.3-0.20180524.0" successfully removed
Logical volume onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 contains a filesystem in use.
Logical volume "ovirt-node-ng-4.2.3.1-0.20180530.0" successfully removed
Do you really want to remove active logical volume onn_ovn-monster/ovirt-node-ng-4.2.3.1-0.20180530.0+1? [y/n]: n
Logical volume onn_ovn-monster/ovirt-node-ng-4.2.3.1-0.20180530.0+1 not removed.
Logical volume "ovirt-node-ng-4.2.4-0.20180626.0" successfully removed
Do you really want to remove active logical volume onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1? [y/n]: y
Logical volume "ovirt-node-ng-4.2.4-0.20180626.0+1" successfully removed
Volume group "ovirt-node-ng-4.2.4-0.20180626.0" not found
Cannot process volume group ovirt-node-ng-4.2.4-0.20180626.0
Volume group "ovirt-node-ng-4.2.4-0.20180626.0+1" not found
Cannot process volume group ovirt-node-ng-4.2.4-0.20180626.0+1
lvm> lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home onn_ovn-monster Vwi-aotz-- 1,00g pool00 4,79
ovirt-node-ng-4.2.1.1-0.20180223.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 2,53
ovirt-node-ng-4.2.3-0.20180524.0+1 onn_ovn-monster Vwi-aotz-- <252,38g pool00 2,63
ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 0,86
pool00 onn_ovn-monster twi-aotz-- <279,38g 7,34 1,11
root onn_ovn-monster Vwi-a-tz-- <252,38g pool00 1,24
swap onn_ovn-monster -wi-ao---- 4,00g
tmp onn_ovn-monster Vwi-aotz-- 1,00g pool00 5,00
var onn_ovn-monster Vwi-aotz-- 15,00g pool00 3,55
var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86
var_log onn_ovn-monster Vwi-aotz-- 8,00g pool00 38,62
var_log_audit onn_ovn-monster Vwi-aotz-- 2,00g pool00 6,75
turbo ovirt -wi-ao---- 894,25g
##### Correct mistake
lvm> lvremove onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
Do you really want to remove active logical volume onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1? [y/n]: y
Logical volume "ovirt-node-ng-4.2.1.1-0.20180223.0+1" successfully removed
lvm> quit
Exiting.
Trying re-reinstall ...
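The transcript above also shows the syntax pitfall that caused the damage: in "lvremove onn_ovn-monster NAME NAME+1", lvm treats onn_ovn-monster as a whole VG (hence the prompt for every LV in it) and the bare NAMEs as further VG names (hence "Volume group ... not found"). A safer form names each LV explicitly as VG/LV - a sketch using the LVs from this thread:

    # Addressing each LV as VG/LV removes the ambiguity, and nothing else in
    # the VG is ever prompted for; -y answers the remaining confirmations.
    lvremove -y onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0 \
                onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1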
Am 03.07.2018 um 21:57 schrieb Yuval Turgeman <yturgema@redhat.com>:
Hi Oliver,
The KeyError happens because there are no bases for the layers. For each LV that ends with a +1, there should be a base read-only LV without +1. So for 3 ovirt-node-ng images, you're supposed to have 6 layers. This is the reason nodectl info fails, and the upgrade will fail also. In your original email it looks OK - I have never seen this happen, was this a manual lvremove ? I need to reproduce this and check what can be done.
You can find me on #ovirt (irc.oftc.net <http://irc.oftc.net/>) also :)
On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de <mailto:Oliver.Riesener@hs-bremen.de>> wrote: Yuval, here comes the lvs output.
The IO Errors are because Node is in maintenance. The LV root is from previous installed centos 7.5. The i have installed node-ng 4.2.1 and got this MIX. The LV turbo is a SSD in it’s own VG named ovirt.
I have removed LV ovirt-node-ng-4.2.1-0.20180223.0 and (+1) removed because nodectl info error:
KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0
Now i get the error @4.2.3: [root@ovn-monster ~]# nodectl info Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module> CliApplication() File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication return cmdmap.command(args) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command return self.commands[command](**kwargs) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info Info(self.imgbased, self.machine).write() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__ self._fetch_information() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information self._get_layout() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout layout = LayoutParser(self.app.imgbase.layout()).parse() File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout return self.naming.layout() File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout tree = self.tree(lvs) File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree bases[img.base.nvr].layers.append(img) KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
lvs -a
[root@ovn-monster ~]# lvs -a /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 5497568559104: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 5497568616448: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 1099526242304: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 1099526299648: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 1099526242304: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 1099526299648: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 536805376: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 536862720: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 2147418112: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 2147475456: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 
0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 536805376: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 536862720: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 536805376: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 536862720: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 2147418112: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 2147475456: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: read failed after 0 of 4096 at 2147418112: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: read failed after 0 of 4096 at 2147475456: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: 
read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler 
/dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert home onn_ovn-monster Vwi-aotz-- 1,00g pool00 4,79 [lvol0_pmspare] onn_ovn-monster ewi------- 144,00m ovirt-node-ng-4.2.3-0.20180524.0+1 onn_ovn-monster Vwi-aotz-- <252,38g pool00 2,88 ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 0,86 ovirt-node-ng-4.2.4-0.20180626.0 onn_ovn-monster Vri-a-tz-k <252,38g pool00 0,85 ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85 pool00 onn_ovn-monster twi-aotz-- <279,38g 6,76 1,01 [pool00_tdata] onn_ovn-monster Twi-ao---- <279,38g [pool00_tmeta] onn_ovn-monster ewi-ao---- 1,00g root onn_ovn-monster Vwi-a-tz-- <252,38g pool00 1,24 swap onn_ovn-monster -wi-ao---- 4,00g tmp onn_ovn-monster Vwi-aotz-- 1,00g pool00 5,01 var onn_ovn-monster Vwi-aotz-- 15,00g pool00 3,56 var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86 var_log onn_ovn-monster Vwi-aotz-- 8,00g pool00 38,48 var_log_audit onn_ovn-monster Vwi-aotz-- 2,00g pool00 6,77 turbo ovirt -wi-ao---- 894,25g
Am 03.07.2018 um 12:58 schrieb Yuval Turgeman <yturgema@redhat.com <mailto:yturgema@redhat.com>>:
Oliver, can you share the output from lvs ?
On Tue, Jul 3, 2018 at 12:06 AM, Oliver Riesener <Oliver.Riesener@hs-bremen.de <mailto:Oliver.Riesener@hs-bremen.de>> wrote: Hi Yuval,
* reinstallation failed, because LV already exists. ovirt-node-ng-4.2.4-0.20180626.0 onn_ovn-monster Vri-a-tz-k <252,38g pool00 0,85 ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85 See attachment imgbased.reinstall.log
* I removed them and re-reinstall without luck.
I got KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />
See attachment imgbased.rereinstall.log
Also a new problem with nodectl info [root@ovn-monster tmp]# nodectl info Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module> CliApplication() File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication return cmdmap.command(args) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command return self.commands[command](**kwargs) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info Info(self.imgbased, self.machine).write() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__ self._fetch_information() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information self._get_layout() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout layout = LayoutParser(self.app.imgbase.layout()).parse() File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout return self.naming.layout() File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout tree = self.tree(lvs) File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree bases[img.base.nvr].layers.append(img) KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
On 02.07.2018 at 22:22, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:
Hi Yuval,
Yes, you are right, there was an unused and deactivated var_crash LV.
* I activated it and mounted it to /var/crash via /etc/fstab.
* /var/crash was empty, and the LV already has an ext4 fs.

var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86
* Now I will try to upgrade again:
* yum reinstall ovirt-node-ng-image-update.noarch
BTW, no more imgbased.log files found.
On 02.07.2018 at 20:57, Yuval Turgeman <yturgema@redhat.com> wrote:
From your log:
AssertionError: Path is already a volume: /var/crash
Basically, it means that you already have an LV for /var/crash but it's not mounted for some reason, so either mount it (if the data is good) or remove it and then reinstall the image-update rpm. Before that, check that you don't have any other LVs in the same state, or post the output of lvs... By the way, do you have any more imgbased.log files lying around?
You can find more details about this here:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
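In shell form, the two options described above look roughly like this (a sketch only; the VG/LV names follow the lvs output in this thread):

# Option A: the data is good - activate the existing LV and mount it
lvchange -ay onn_ovn-monster/var_crash
mount /dev/onn_ovn-monster/var_crash /var/crash

# Option B: the LV is not needed - remove it, then reinstall the update
lvremove onn_ovn-monster/var_crash
yum reinstall ovirt-node-ng-image-update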
On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:

Hi,
I attached my /tmp/imgbased.log.
Cheers
Oliver
On 02.07.2018 at 13:58, Yuval Turgeman <yuvalt@redhat.com> wrote:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?
Thanks, Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:

Yuval, can you please have a look?
On 2018-06-30 at 7:48 GMT+02:00, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:

Yes, here is the same.
It seems the bootloader isn't configured right?
I did the upgrade and reboot to 4.2.4 from the UI and got:
[root@ovn-monster ~]# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0:
    ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0:
    ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
    ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
    ovirt-node-ng-4.2.1.1-0.20180223.0+1:
      index: 1
      title: ovirt-node-ng-4.2.1.1-0.20180223.0
      kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
      initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1

[root@ovn-monster ~]# uptime
 07:35:27 up 2 days, 15:42, 1 user, load average: 1,07, 1,00, 0,95
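A quick way to check which entry grub will actually boot (standard el7 grub2 tools; a sketch, not commands from this thread):

# Show the saved default entry
grub2-editenv list
# List the entry titles known to grub (use /etc/grub2-efi.cfg on UEFI hosts)
awk -F\' '/^menuentry/ {print $2}' /etc/grub2.cfg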

Hi Oliver,

I would try the following, but please notice it is *very* dangerous, so a backup is probably a good idea (man vgcfgrestore)...

1. vgcfgrestore --list onn_ovn-monster
2. search for a .vg file that was created before deleting those 2 LVs (ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force
4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
6. lvremove the LVs from the thinpool that are not mounted/used (var_crash?)
7. nodectl info to make sure everything is ok
8. reinstall the image-update rpm

Thanks, Yuval.
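Spelled out as shell commands, that sequence looks roughly like this (a sketch only; the archive file name below is a placeholder that has to be picked from the vgcfgrestore --list output):

# Steps 1-3: find and restore VG metadata from before the lvremoves
vgcfgrestore --list onn_ovn-monster
vgcfgrestore -f /etc/lvm/archive/onn_ovn-monster_NNNNN.vg onn_ovn-monster --force   # placeholder file name

# Steps 4-6: drop the half-created 4.2.4 layers and any unused thin LVs
lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1

# Steps 7-8: verify the layout, then reinstall the update
nodectl info
yum reinstall ovirt-node-ng-image-update

On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman <yturgema@redhat.com> wrote: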
Hi Oliver,
The KeyError happens because there are no bases for the layers. For each LV that ends with a +1, there should be a base read-only LV without the +1. So for 3 ovirt-node-ng images, you're supposed to have 6 LVs. This is the reason nodectl info fails, and the upgrade will also fail. In your original email it looks OK - I have never seen this happen; was this a manual lvremove? I need to reproduce this and check what can be done.
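To picture a healthy layout (an illustration using the names from this thread, not actual output):

ovirt-node-ng-4.2.4-0.20180626.0      # base, read-only (Vri-...-k)
ovirt-node-ng-4.2.4-0.20180626.0+1    # its writable layer (Vwi-...)

A surviving +1 layer whose base LV is gone is exactly what makes imgbased's tree() raise the KeyError in the traceback above.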
You can find me on #ovirt (irc.oftc.net) also :)
On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:
Yuval, here comes the lvs output.
The I/O errors are because the node is in maintenance. The LV root is from a previously installed CentOS 7.5; then I installed node-ng 4.2.1 and got this mix. The LV turbo is an SSD in its own VG named ovirt.
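If the noise gets in the way while the storage domains are detached, LVM can be told to skip those devices for a single command (a sketch; the reject pattern is an example only):

# Ignore the unreachable multipath devices for this invocation
lvs --config 'devices { filter = [ "r|/dev/mapper/36090a.*|", "a|.*|" ] }'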
I removed the LVs ovirt-node-ng-4.2.1-0.20180223.0 (and +1) because of this nodectl info error:
KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0
Now I get the error @4.2.3:

[root@ovn-monster ~]# nodectl info
[same traceback as above]
KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />

I did it, with issues; see attachment.
Am 03.07.2018 um 22:25 schrieb Yuval Turgeman <yturgema@redhat.com>:
Hi Oliver,
I would try the following, but please notice it is *very* dangerous, so a backup is probably a good idea (man vgcfgrestore)...
1. vgcfgrestore --list onn_ovn-monster 2. search for a .vg file that was created before deleting those 2 lvs (ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0) 3. vgcfgrestore -f path-to-the-file-from-step2.vg <http://path-to-the-file-from-step2.vg/> onn_ovn-monster --force 4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0 5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1 6. lvremove the lvs from the thinpool that are not mounted/used (var_crash?) 7. nodectl info to make sure everything is ok 8. reinstall the image-update rpm
Thanks, Yuval.
On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman <yturgema@redhat.com <mailto:yturgema@redhat.com>> wrote: Hi Oliver,
The KeyError happens because there are no bases for the layers. For each LV that ends with a +1, there should be a base read-only LV without +1. So for 3 ovirt-node-ng images, you're supposed to have 6 layers. This is the reason nodectl info fails, and the upgrade will fail also. In your original email it looks OK - I have never seen this happen, was this a manual lvremove ? I need to reproduce this and check what can be done.
You can find me on #ovirt (irc.oftc.net <http://irc.oftc.net/>) also :)
On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de <mailto:Oliver.Riesener@hs-bremen.de>> wrote: Yuval, here comes the lvs output.
The IO Errors are because Node is in maintenance. The LV root is from previous installed centos 7.5. The i have installed node-ng 4.2.1 and got this MIX. The LV turbo is a SSD in it’s own VG named ovirt.
I have removed LV ovirt-node-ng-4.2.1-0.20180223.0 and (+1) removed because nodectl info error:
KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0
Now i get the error @4.2.3: [root@ovn-monster ~]# nodectl info Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module> CliApplication() File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication return cmdmap.command(args) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command return self.commands[command](**kwargs) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info Info(self.imgbased, self.machine).write() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__ self._fetch_information() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information self._get_layout() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout layout = LayoutParser(self.app.imgbase.layout()).parse() File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout return self.naming.layout() File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout tree = self.tree(lvs) File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree bases[img.base.nvr].layers.append(img) KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
lvs -a
[root@ovn-monster ~]# lvs -a /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 5497568559104: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 5497568616448: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 1099526242304: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 1099526299648: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 1099526242304: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 1099526299648: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 536805376: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 536862720: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 2147418112: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 2147475456: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 
0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/inbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/master: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 536805376: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 536862720: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/metadata: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 536805376: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 536862720: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/metadata: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/ids: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/ids: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 2147418112: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 2147475456: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/leases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: read failed after 0 of 4096 at 2147418112: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: read failed after 0 of 4096 at 2147475456: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/leases: 
read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/outbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/xleases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/inbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler /dev/bcdbb66e-6196-4366-be25-a3e9ab948839/master: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/outbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/inbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/master: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler 
/dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert home onn_ovn-monster Vwi-aotz-- 1,00g pool00 4,79 [lvol0_pmspare] onn_ovn-monster ewi------- 144,00m ovirt-node-ng-4.2.3-0.20180524.0+1 onn_ovn-monster Vwi-aotz-- <252,38g pool00 2,88 ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 0,86 ovirt-node-ng-4.2.4-0.20180626.0 onn_ovn-monster Vri-a-tz-k <252,38g pool00 0,85 ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85 pool00 onn_ovn-monster twi-aotz-- <279,38g 6,76 1,01 [pool00_tdata] onn_ovn-monster Twi-ao---- <279,38g [pool00_tmeta] onn_ovn-monster ewi-ao---- 1,00g root onn_ovn-monster Vwi-a-tz-- <252,38g pool00 1,24 swap onn_ovn-monster -wi-ao---- 4,00g tmp onn_ovn-monster Vwi-aotz-- 1,00g pool00 5,01 var onn_ovn-monster Vwi-aotz-- 15,00g pool00 3,56 var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86 var_log onn_ovn-monster Vwi-aotz-- 8,00g pool00 38,48 var_log_audit onn_ovn-monster Vwi-aotz-- 2,00g pool00 6,77 turbo ovirt -wi-ao---- 894,25g
Am 03.07.2018 um 12:58 schrieb Yuval Turgeman <yturgema@redhat.com <mailto:yturgema@redhat.com>>:
Oliver, can you share the output from lvs ?
On Tue, Jul 3, 2018 at 12:06 AM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:

Hi Yuval,
* Reinstallation failed because the LVs already exist:
  ovirt-node-ng-4.2.4-0.20180626.0   onn_ovn-monster Vri-a-tz-k <252,38g pool00 0,85
  ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85
  See attachment imgbased.reinstall.log.
* I removed them and reinstalled again, without luck.
I got KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />
See attachment imgbased.rereinstall.log
Also a new problem with nodectl info:

[root@ovn-monster tmp]# nodectl info
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module>
    CliApplication()
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication
    return cmdmap.command(args)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command
    return self.commands[command](**kwargs)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info
    Info(self.imgbased, self.machine).write()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__
    self._fetch_information()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information
    self._get_layout()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout
    layout = LayoutParser(self.app.imgbase.layout()).parse()
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout
    return self.naming.layout()
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout
    tree = self.tree(lvs)
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree
    bases[img.base.nvr].layers.append(img)
KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
On 02.07.2018 at 22:22, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:
Hi Yuval,
Yes, you are right, there was an unused and deactivated var_crash LV.
* I activated it and mounted it at /var/crash via /etc/fstab.
* /var/crash was empty, and the LV already had an ext4 fs.
  var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86
* Now I will try to upgrade again:
* yum reinstall ovirt-node-ng-image-update.noarch
BTW, no more imgbased.log files found.
On 02.07.2018 at 20:57, Yuval Turgeman <yturgema@redhat.com> wrote:
From your log:
AssertionError: Path is already a volume: /var/crash
Basically, it means that you already have an LV for /var/crash, but it's not mounted for some reason. So either mount it (if the data is good) or remove it, and then reinstall the image-update rpm. Before that, check that you don't have any other LVs in that same state - or you can post the output of lvs... BTW, do you have any more imgbased.log files lying around?
You can find more details about this here:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
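(A minimal sketch of the two options above, assuming the VG name onn_ovn-monster from this thread; check lvs on your own host first, names may differ:)

# option 1: the data is good - activate and mount the existing LV
lvchange -ay onn_ovn-monster/var_crash
mount /dev/onn_ovn-monster/var_crash /var/crash

# option 2: the LV is empty/unneeded - remove it
lvremove onn_ovn-monster/var_crash

# either way, retry the failed update afterwards
yum reinstall ovirt-node-ng-image-update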
On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:

Hi,
I attached my /tmp/imgbased.log.
Cheers
Oliver
On 02.07.2018 at 13:58, Yuval Turgeman <yuvalt@redhat.com> wrote:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?
Thanks, Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:

Yuval, can you please have a look?
2018-06-30 7:48 GMT+02:00 Oliver Riesener <Oliver.Riesener@hs-bremen.de>:

Yes, here is the same.
It seems the bootloader isn't configured right?
I did the upgrade and reboot to 4.2.4 from the UI and got:
[root@ovn-monster ~]# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0:
    ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0:
    ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
    ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
    ovirt-node-ng-4.2.1.1-0.20180223.0+1:
      index: 1
      title: ovirt-node-ng-4.2.1.1-0.20180223.0
      kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
      initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
[root@ovn-monster ~]# uptime
07:35:27 up 2 days, 15:42, 1 user, load average: 1,07, 1,00, 0,95
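(Not oVirt-specific, but a quick way to double-check what grub will actually boot, independent of nodectl - standard grub2 tooling on el7, shown as an illustrative sketch; the grub.cfg path differs on EFI hosts:)

grub2-editenv list                      # shows saved_entry, i.e. the current default
grep ^menuentry /boot/grub2/grub.cfg    # lists the available boot entries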
On 29.06.2018 at 23:53, Matt Simonsen <matt@khoza.com> wrote:

[Message clipped]

OK, good, this is much better now, but ovirt-node-ng-4.2.4-0.20180626.0+1 still exists without its base - try this:

1. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
2. nodectl info

On Tue, Jul 3, 2018 at 11:52 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:
I did it, with issues, see attachment.
On 03.07.2018 at 22:25, Yuval Turgeman <yturgema@redhat.com> wrote:
Hi Oliver,
I would try the following, but please note it is *very* dangerous, so a backup is probably a good idea (man vgcfgrestore)...
1. vgcfgrestore --list onn_ovn-monster
2. Search for a .vg file that was created before deleting those 2 LVs (ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force
4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
6. lvremove the LVs from the thin pool that are not mounted/used (var_crash?)
7. nodectl info to make sure everything is ok
8. Reinstall the image-update rpm (a command-level sketch of these steps follows below)
Thanks, Yuval.
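(As a rough command transcript of steps 1-8 above, under the same VG name; the archive file name is purely illustrative - pick the real one from the --list output. LVM keeps these backups under /etc/lvm/archive:)

vgcfgrestore --list onn_ovn-monster
vgcfgrestore -f /etc/lvm/archive/onn_ovn-monster_00042-1234567890.vg onn_ovn-monster --force
lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1   # layer before its base
lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
lvremove onn_ovn-monster/var_crash                            # only if unused
nodectl info
yum reinstall ovirt-node-ng-image-update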
On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman <yturgema@redhat.com> wrote:
Hi Oliver,
The KeyError happens because there are no bases for the layers. For each LV that ends with a +1 there should be a base read-only LV without the +1, so for 3 ovirt-node-ng images you're supposed to have 6 LVs (3 bases + 3 layers). This is the reason nodectl info fails, and the upgrade will fail as well. In your original email it looks OK - I have never seen this happen; was this a manual lvremove? I need to reproduce this and check what can be done.
You can find me on #ovirt (irc.oftc.net) also :)
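(For illustration of the pairing described above, a healthy pair in lvs looks like this - a read-only, skip-activation base plus a writable +1 layer whose Origin is that base; names taken from this thread:)

ovirt-node-ng-4.2.4-0.20180626.0    onn_ovn-monster Vri-a-tz-k <252,38g pool00
ovirt-node-ng-4.2.4-0.20180626.0+1  onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0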
On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:
Yuval, here comes the lvs output.
The I/O errors are because the node is in maintenance. The LV root is from the previously installed CentOS 7.5; then I installed node-ng 4.2.1 and got this mix. The LV turbo is an SSD in its own VG named ovirt.
I removed the LVs ovirt-node-ng-4.2.1-0.20180223.0 and (+1) because of this nodectl info error:
KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />
Now I get the error @4.2.3:

[root@ovn-monster ~]# nodectl info
Traceback (most recent call last):
[traceback identical to the one above]
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree
    bases[img.base.nvr].layers.append(img)
KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
lvs -a
[root@ovn-monster ~]# lvs -a
/dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 0: Input/output error
/dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 0: Input/output error
/dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 0: Input/output error
[dozens of similar "read failed after 0 of 4096 ...: Input/output error" lines clipped, covering the metadata, ids, leases, outbox, xleases, inbox and master LVs on /dev/675cb45d-3746-4f3b-b9ee-516612da50e5, /dev/c91974bf-fd64-4630-8005-e785b73acbef and /dev/bcdbb66e-6196-4366-be25-a3e9ab948839]
/dev/c91974bf-fd64-4630-8005-e785b73acbef/xleases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert home onn_ovn-monster Vwi-aotz-- 1,00g pool00 4,79
[lvol0_pmspare] onn_ovn-monster ewi------- 144,00m
ovirt-node-ng-4.2.3-0.20180524.0+1 onn_ovn-monster Vwi-aotz-- <252,38g pool00 2,88
ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 0,86
ovirt-node-ng-4.2.4-0.20180626.0 onn_ovn-monster Vri-a-tz-k <252,38g pool00 0,85
ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85
pool00 onn_ovn-monster twi-aotz-- <279,38g 6,76 1,01
[pool00_tdata] onn_ovn-monster Twi-ao---- <279,38g
[pool00_tmeta] onn_ovn-monster ewi-ao---- 1,00g
root onn_ovn-monster Vwi-a-tz-- <252,38g pool00 1,24
swap onn_ovn-monster -wi-ao---- 4,00g
tmp onn_ovn-monster Vwi-aotz-- 1,00g pool00 5,01
var onn_ovn-monster Vwi-aotz-- 15,00g pool00 3,56
var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86
var_log onn_ovn-monster Vwi-aotz-- 8,00g pool00 38,48
var_log_audit onn_ovn-monster Vwi-aotz-- 2,00g pool00 6,77
turbo ovirt -wi-ao---- 894,25g
On 03.07.2018 at 12:58, Yuval Turgeman <yturgema@redhat.com> wrote:
[Message clipped]

Hi Yuval,

as you can see in my last attachment, after the LV metadata restore I was unable to modify LVs in pool00: the thin pool had queued transactions (got 23, expected 16 or so). I rebooted and tried to repair from a CentOS 7 USB stick, but couldn't access or remove the LVs because they hold a read lock, so taking a write lock is prohibited. The system now boots only into the dracut emergency console, so for reliability I decided to reinstall with a fresh 4.2.4 node after cleaning the disk. :-) Now it is running ovirt-node-ng-4.2.4.

Noticeable on this issue:
- ovirt-node-ng should not be installed on previously used CentOS disks without cleaning them first (var_crash LV).
- Upgrades, e.g. 4.2.4, should be easy to reinstall.
- What about old versions in the LV thin pool - how can they be removed safely? (See the sketch below.)
- fstrim -av also trims LV thin-pool volumes, nice :-)

Many thanks to you, I have learned a lot about LVM.

Oliver
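(On the open question above, a hedged sketch only: remove both LVs of a pair, never just one, and only for a layer that is neither current_layer nor the bootloader default - removing half a pair is exactly what broke nodectl info earlier in this thread:)

nodectl info    # confirm current_layer and the default boot entry first
lvremove onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
lvremove onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0
# the matching kernel/initramfs under /boot/<layer>+1 and its bootloader
# entry should be cleaned up as well, or the old entry will linger in grub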
Am 03.07.2018 um 22:58 schrieb Yuval Turgeman <yturgema@redhat.com>:
OK Good, this is much better now, but ovirt-node-ng-4.2.4-0.20180626.0+1 still exists without its base - try this:
1. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1 2. nodectl info
On Tue, Jul 3, 2018 at 11:52 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote: I did it, with issues, see attachment.
Am 03.07.2018 um 22:25 schrieb Yuval Turgeman <yturgema@redhat.com>:
Hi Oliver,
I would try the following, but please notice it is *very* dangerous, so a backup is probably a good idea (man vgcfgrestore)...
1. vgcfgrestore --list onn_ovn-monster 2. search for a .vg file that was created before deleting those 2 lvs (ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0) 3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force 4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0 5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1 6. lvremove the lvs from the thinpool that are not mounted/used (var_crash?) 7. nodectl info to make sure everything is ok 8. reinstall the image-update rpm
Thanks, Yuval.
On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman <yturgema@redhat.com> wrote: Hi Oliver,
The KeyError happens because there are no bases for the layers. For each LV that ends with a +1, there should be a base read-only LV without +1. So for 3 ovirt-node-ng images, you're supposed to have 6 layers. This is the reason nodectl info fails, and the upgrade will fail also. In your original email it looks OK - I have never seen this happen, was this a manual lvremove ? I need to reproduce this and check what can be done.
You can find me on #ovirt (irc.oftc.net) also :)
On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote: Yuval, here comes the lvs output.
The IO Errors are because Node is in maintenance. The LV root is from previous installed centos 7.5. The i have installed node-ng 4.2.1 and got this MIX. The LV turbo is a SSD in it’s own VG named ovirt.
I have removed LV ovirt-node-ng-4.2.1-0.20180223.0 and (+1) removed because nodectl info error:
KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0
Now i get the error @4.2.3: [root@ovn-monster ~]# nodectl info Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module> CliApplication() File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication return cmdmap.command(args) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command return self.commands[command](**kwargs) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info Info(self.imgbased, self.machine).write() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__ self._fetch_information() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information self._get_layout() File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout layout = LayoutParser(self.app.imgbase.layout()).parse() File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout return self.naming.layout() File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout tree = self.tree(lvs) File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree bases[img.base.nvr].layers.append(img) KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
lvs -a
[root@ovn-monster ~]# lvs -a /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 5497568559104: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 5497568616448: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 1099526242304: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 1099526299648: Eingabe-/Ausgabefehler /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 1099526242304: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 1099526299648: Eingabe-/Ausgabefehler /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 536805376: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 536862720: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 2147418112: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 2147475456: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 134152192: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 134209536: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/outbox: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 1073676288: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 0 of 4096 at 1073733632: Eingabe-/Ausgabefehler /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/xleases: read failed after 
[read-failure lines snipped - every LV of the inactive storage domains (/dev/675cb45d-3746-4f3b-b9ee-516612da50e5, /dev/c91974bf-fd64-4630-8005-e785b73acbef, /dev/bcdbb66e-6196-4366-be25-a3e9ab948839: metadata, ids, leases, outbox, xleases, inbox, master) reports "read failed after 0 of 4096: Eingabe-/Ausgabefehler" (input/output error)]

  LV                                   VG              Attr       LSize    Pool   Origin                           Data% Meta% Move Log Cpy%Sync Convert
  home                                 onn_ovn-monster Vwi-aotz--    1,00g pool00                                   4,79
  [lvol0_pmspare]                      onn_ovn-monster ewi-------  144,00m
  ovirt-node-ng-4.2.3-0.20180524.0+1   onn_ovn-monster Vwi-aotz-- <252,38g pool00                                   2,88
  ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00                                   0,86
  ovirt-node-ng-4.2.4-0.20180626.0     onn_ovn-monster Vri-a-tz-k <252,38g pool00                                   0,85
  ovirt-node-ng-4.2.4-0.20180626.0+1   onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0  0,85
  pool00                               onn_ovn-monster twi-aotz-- <279,38g                                          6,76  1,01
  [pool00_tdata]                       onn_ovn-monster Twi-ao---- <279,38g
  [pool00_tmeta]                       onn_ovn-monster ewi-ao----    1,00g
  root                                 onn_ovn-monster Vwi-a-tz-- <252,38g pool00                                   1,24
  swap                                 onn_ovn-monster -wi-ao----    4,00g
  tmp                                  onn_ovn-monster Vwi-aotz--    1,00g pool00                                   5,01
  var                                  onn_ovn-monster Vwi-aotz--   15,00g pool00                                   3,56
  var_crash                            onn_ovn-monster Vwi-aotz--   10,00g pool00                                   2,86
  var_log                              onn_ovn-monster Vwi-aotz--    8,00g pool00                                  38,48
  var_log_audit                        onn_ovn-monster Vwi-aotz--    2,00g pool00                                   6,77
  turbo                                ovirt           -wi-ao----  894,25g
On 03.07.2018 at 12:58, Yuval Turgeman <yturgema@redhat.com> wrote:

Oliver, can you share the output from lvs?
On Tue, Jul 3, 2018 at 12:06 AM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:

Hi Yuval,
* Reinstallation failed because the LVs already exist:

  ovirt-node-ng-4.2.4-0.20180626.0   onn_ovn-monster Vri-a-tz-k <252,38g pool00                                  0,85
  ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85

  See attachment imgbased.reinstall.log

* I removed them and reinstalled again, without luck.
  I got: KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />
  See attachment imgbased.rereinstall.log
Also a new problem with nodectl info:

[root@ovn-monster tmp]# nodectl info
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module>
    CliApplication()
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication
    return cmdmap.command(args)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command
    return self.commands[command](**kwargs)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info
    Info(self.imgbased, self.machine).write()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__
    self._fetch_information()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information
    self._get_layout()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout
    layout = LayoutParser(self.app.imgbase.layout()).parse()
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout
    return self.naming.layout()
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout
    tree = self.tree(lvs)
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree
    bases[img.base.nvr].layers.append(img)
KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
On 02.07.2018 at 22:22, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:
Hi Yuval,
yes you are right, there was an unused and deactivated var_crash LV.
* I activated it and mounted it on /var/crash via /etc/fstab.
* /var/crash was empty, and the LV already had an ext4 fs.

  var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00 2,86

* Now I will try to upgrade again:
  yum reinstall ovirt-node-ng-image-update.noarch
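In shell terms, that recovery was roughly (a sketch only - the VG/LV names are the ones from this host, adjust as needed):

  lvchange -ay onn_ovn-monster/var_crash
  echo '/dev/onn_ovn-monster/var_crash /var/crash ext4 defaults 0 0' >> /etc/fstab
  mount /var/crash
  yum reinstall ovirt-node-ng-image-update.noarch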
BTW, no more imgbased.log files found.
On 02.07.2018 at 20:57, Yuval Turgeman <yturgema@redhat.com> wrote:
From your log:
AssertionError: Path is already a volume: /var/crash
Basically, it means that you already have an LV for /var/crash but it's not mounted for some reason, so either mount it (if the data is good) or remove it, and then reinstall the image-update rpm. Before that, check that you don't have any other LVs in that same state - or you can post the output of lvs... btw, do you have any more imgbased.log files lying around?
You can find more details about this here:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
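Something like this will show whether any other LV is in that state (just a sketch - the VG name onn_ovn-monster is taken from your earlier output):

  lvs -o lv_name,lv_attr,lv_size onn_ovn-monster   # list all LVs in the node VG
  findmnt /var/crash || echo "not mounted"         # no mount here -> the AssertionError above
  mount /dev/onn_ovn-monster/var_crash /var/crash  # either mount it, if the data is good...
  lvremove onn_ovn-monster/var_crash               # ...or remove it, if it is disposable
  yum reinstall ovirt-node-ng-image-update         # then retry the upgrade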
On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:

Hi,

I attached my /tmp/imgbased.log

Cheers

Oliver
On 02.07.2018 at 13:58, Yuval Turgeman <yuvalt@redhat.com> wrote:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log?
Thanks, Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:

Yuval, can you please have a look?

On 2018-06-30 at 07:48 GMT+02:00, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:

Yes, here is the same.

It seems the bootloader isn't configured right?

I did the upgrade and reboot to 4.2.4 from the UI and got:
[root@ovn-monster ~]# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0:
    ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0:
    ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
    ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
    ovirt-node-ng-4.2.1.1-0.20180223.0+1:
      index: 1
      title: ovirt-node-ng-4.2.1.1-0.20180223.0
      kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
      initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1

[root@ovn-monster ~]# uptime
 07:35:27 up 2 days, 15:42, 1 user, load average: 1,07, 1,00, 0,95
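A quick way to check which entry grub will actually boot (a rough sketch; paths assume a grub2/BIOS install like this node):

  grub2-editenv list                      # saved_entry is the default boot entry
  grep '^menuentry' /boot/grub2/grub.cfg  # compare against the entries nodectl info lists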
> [Matt Simonsen's original report of 29.06.2018 quoted in full - snipped, see the top of this thread]

Hi Oliver,

Sorry we couldn't get this to upgrade, but removing the base layers kinda killed us - however, we already have some ideas on how to improve imgbased to make it more friendly :)

Thanks for the update !
Yuval.

On Thu, Jul 5, 2018 at 3:52 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:
Hi Yuval,
As you can see in my last attachment, after the LV metadata restore I was unable to modify LVs in pool00. The thin pool had queued transactions - it got transaction id 23 where 16 or so was expected.

I rebooted and tried repairing from a CentOS 7 USB stick, but couldn't access or remove the LVs: they held a read lock, so taking the write lock was prohibited.

The system would only boot into the dracut emergency console, so for reliability I decided to reinstall it with a fresh 4.2.4 node after cleaning the disk. :-)
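(For reference, the usual first attempt at a thin-pool transaction-id mismatch like the one above is an offline metadata repair - a sketch only, it needs the pool deactivated and spare space in the VG, and in this case it was already too late:

  lvchange -an onn_ovn-monster/pool00        # pool and all thin LVs must be inactive
  lvconvert --repair onn_ovn-monster/pool00  # runs thin_repair, swapping in repaired metadata
  lvchange -ay onn_ovn-monster/pool00
)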
Now it is running ovirt-node-ng-4.2.4. Noticeable on this issue:

- ng-node should not be installed on previously used CentOS disks without cleaning them first (the leftover var_crash LV).
- Upgrades, e.g. to 4.2.4, should be easily reinstall-able.
- What about old versions in the LV thin pool - how can they be removed safely? (See the sketch below.)
- fstrim -av also trims LV thin pool volumes, nice :-)
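On the safe-removal question: what bit us in this thread was a base LV disappearing while its +1 layer was still around, which breaks imgbased's naming tree. So if an old image has to go at all, remove the pair together and verify - a sketch only, using the oldest NVR from this host:

  lvremove onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1  # writable layer first
  lvremove onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0    # then its read-only base
  nodectl info                                                   # must still parse cleanly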
Many thanks to you, I have learned a lot about LVM.
Oliver
On 03.07.2018 at 22:58, Yuval Turgeman <yturgema@redhat.com> wrote:
OK, good - this is much better now, but ovirt-node-ng-4.2.4-0.20180626.0+1 still exists without its base. Try this:

1. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
2. nodectl info
On Tue, Jul 3, 2018 at 11:52 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:

I did it, with issues, see attachment.
On 03.07.2018 at 22:25, Yuval Turgeman <yturgema@redhat.com> wrote:
Hi Oliver,
I would try the following, but please notice it is *very* dangerous, so a backup is probably a good idea (man vgcfgrestore)...
1. vgcfgrestore --list onn_ovn-monster
2. Search for a .vg file that was created before deleting those 2 LVs (ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force
4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
6. lvremove the LVs from the thinpool that are not mounted/used (var_crash?)
7. nodectl info to make sure everything is ok
8. Reinstall the image-update rpm
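Before step 3 it is probably worth saving the current metadata as well, so even the restore itself can be undone (a sketch):

  vgcfgbackup -f /root/onn_ovn-monster-before-restore.vg onn_ovn-monster
  vgcfgrestore --list onn_ovn-monster   # then pick a backup from before the lvremoves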
Thanks, Yuval.
On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman <yturgema@redhat.com> wrote: Hi Oliver,
The KeyError happens because there are no bases for the layers. For each LV that ends with a +1, there should be a read-only base LV without the +1 - so for 3 ovirt-node-ng images you're supposed to have 6 LVs. This is the reason nodectl info fails, and the upgrade will fail too. In your original email it looked OK - I have never seen this happen; was this a manual lvremove? I need to reproduce this and check what can be done.
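A quick way to check the pairing (a sketch):

  lvs --noheadings -o lv_name onn_ovn-monster | grep ovirt-node-ng
  # every <NVR>+1 layer in this list should have a matching <NVR> base line;
  # a +1 without its base is exactly what makes naming.py raise the KeyError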
You can find me on #ovirt (irc.oftc.net) also :)
On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <Oliver.Riesener@hs-bremen.de> wrote:

Yuval, here comes the lvs output.

The IO errors are because the node is in maintenance. The LV root is from a previously installed CentOS 7.5; then I installed node-ng 4.2.1 and got this mix. The LV turbo is an SSD in its own VG named ovirt.
I removed LV ovirt-node-ng-4.2.1-0.20180223.0 and (+1) because of this nodectl info error:

KeyError: <NVR ovirt-node-ng-4.2.1.1-0.20180223.0 />
Now I get the error @4.2.3:

[root@ovn-monster ~]# nodectl info
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module>
    CliApplication()
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication
    return cmdmap.command(args)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command
    return self.commands[command](**kwargs)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info
    Info(self.imgbased, self.machine).write()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__
    self._fetch_information()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information
    self._get_layout()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout
    layout = LayoutParser(self.app.imgbase.layout()).parse()
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout
    return self.naming.layout()
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout
    tree = self.tree(lvs)
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree
    bases[img.base.nvr].layers.append(img)
KeyError: <NVR ovirt-node-ng-4.2.3-0.20180524.0 />
lvs -a
[root@ovn-monster ~]# lvs -a
[output snipped - identical to the lvs output quoted at the top of this thread: "read failed after 0 of 4096: Eingabe-/Ausgabefehler" (input/output error) for the multipath devices (/dev/mapper/36090a028...) and storage-domain LVs, followed by the LV table]
[quoted thread snipped - an exact repeat of the exchange above, from Yuval's "Oliver, can you share the output from lvs?" down to Matt Simonsen's original report]
participants (6)

- Matt Simonsen
- Oliver Riesener
- Sandro Bonazzola
- Yedidyah Bar David
- Yuval Turgeman
- Yuval Turgeman