Usage Of More Up-To-Date Versions Of The "kojihub" rpm Files
by Matthew J Black
Hi All,
When installing the latest version of oVirt on RHEL 9, the doco says to grab a couple of rpm files from `kojihub.stream.centos.org`. The files to grab are for v2.0.0. Since there are newer files on the server, I'm wondering if the doco might be a couple of months(?) out of date and whether we can instead grab the newer versions (or not, as the case may be). Could one (or more) of the "main" oVirt devs jump in with an answer, please?
For the record, when I get 5 minutes to scratch my butt I want to spin up a test cluster and try this (and other things) out for myself, with the idea of reporting back to the Community - but I need to get a PROD cluster up and running ASAP and so don't have the luxury of "experimenting" right at this moment - hence my question.
Thanks in advance
Cheers
Dulux-Oz
1 year, 2 months
oVirt Self-Hosted Engine Deployment Error
by Matthew J Black
Hi Guys,
New oVirt install using latest versions on a Rocky Linux v9.3 host.
We're getting the following error in the setup logs:
~~~
2024-01-09 17:14:53,977+1100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["ip", "rule", "add", "from", "192.168.1.1/255.255.255.0", "priority", "101", "table", "main"], "delta": "0:00:00.002702", "end": "2024-01-09 17:14:53.680933", "msg": "non-zero return code", "rc": 2, "start": "2024-01-09 17:14:53.678231", "stderr": "RTNETLINK answers: File exists", "stderr_lines": ["RTNETLINK answers: File exists"], "stdout": "", "stdout_lines": []}
~~~
So, which file is "RTNETLINK answers: File exists" referring to, and can I simply delete that file manually and re-run `hosted-engine --deploy`?
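For what it's worth, "File exists" here is the kernel's EEXIST error coming back over netlink rather than a reference to an actual file: the routing-policy rule that the deploy playbook tries to add is already present. A minimal sketch of checking for the leftover rule (the helper name and the sample listing are illustrative, not oVirt code):

```shell
# "File exists" from `ip rule add` is the kernel's EEXIST: the routing
# policy rule already exists, so there is no file to delete.
# Hypothetical helper: check a rule listing for a given source prefix.
rule_exists() {
    # $1 = output of `ip rule show`, $2 = source prefix to look for
    printf '%s\n' "$1" | grep -q "from $2"
}

# Sample listing in the format `ip rule show` prints (illustrative):
sample='0:      from all lookup local
101:    from 192.168.1.0/24 lookup main
32766:  from all lookup main'

if rule_exists "$sample" "192.168.1.0/24"; then
    echo "rule already present; clear it before re-deploying"
fi
```

On the host itself the check would be `ip rule show | grep 192.168.1`, and `ip rule del priority 101` (run as root) should clear the leftover rule before re-running `hosted-engine --deploy`.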
Cheers
Dulux-Oz
1 year, 2 months
Upgrading EL9 host from 4.5.4 to 4.5.5
by Devin A. Bougie
Hi, All. When upgrading an EL9 host from 4.5.4 to 4.5.5, I've found I need to exclude the following packages to avoid the errors shown below:
*openvswitch*,*ovn*,centos-release-nfv-common
Is that to be expected, or am I missing a required repo or other upgrade step? I just wanted to clarify, as the docs seem a little outdated, at least with respect to the comments about nmstate:
https://ovirt.org/download/install_on_rhel.html
Thanks,
Devin
------
[root@lnxvirt01 ~]# rpm -qa |grep -i openvswitch
openvswitch-selinux-extra-policy-1.0-31.el9s.noarch
ovirt-openvswitch-ovn-2.17-1.el9.noarch
openvswitch2.17-2.17.0-103.el9s.x86_64
python3-openvswitch2.17-2.17.0-103.el9s.x86_64
openvswitch2.17-ipsec-2.17.0-103.el9s.x86_64
ovirt-openvswitch-ovn-host-2.17-1.el9.noarch
ovirt-openvswitch-ipsec-2.17-1.el9.noarch
ovirt-python-openvswitch-2.17-1.el9.noarch
ovirt-openvswitch-2.17-1.el9.noarch
ovirt-openvswitch-ovn-common-2.17-1.el9.noarch
centos-release-nfv-openvswitch-1-5.el9.noarch
[root@lnxvirt01 ~]# dnf update
173 files removed
CLASSE oVirt Packages - x86_64                     988 kB/s | 9.6 kB  00:00
CLASSE Packages - x86_64                            45 MB/s | 642 kB  00:00
CentOS-9-stream - Ceph Pacific                     561 kB/s | 557 kB  00:00
CentOS-9-stream - Gluster 10                       245 kB/s |  56 kB  00:00
CentOS-9 - RabbitMQ 38                             392 kB/s | 104 kB  00:00
CentOS Stream 9 - NFV OpenvSwitch                  709 kB/s | 154 kB  00:00
CentOS-9 - OpenStack yoga                           11 MB/s | 3.0 MB  00:00
CentOS Stream 9 - OpsTools - collectd              175 kB/s |  51 kB  00:00
CentOS Stream 9 - Extras packages                   57 kB/s |  15 kB  00:00
CentOS Stream 9 - oVirt 4.5                        2.7 MB/s | 1.0 MB  00:00
oVirt upstream for CentOS Stream 9 - oVirt 4.5     932  B/s | 7.5 kB  00:08
AlmaLinux 9 - AppStream                             84 MB/s | 8.1 MB  00:00
AlmaLinux 9 - BaseOS                                75 MB/s | 3.5 MB  00:00
AlmaLinux 9 - BaseOS - Debug                        12 MB/s | 2.2 MB  00:00
AlmaLinux 9 - CRB                                   67 MB/s | 2.3 MB  00:00
AlmaLinux 9 - Extras                               1.5 MB/s |  17 kB  00:00
AlmaLinux 9 - HighAvailability                      30 MB/s | 434 kB  00:00
AlmaLinux 9 - NFV                                   70 MB/s | 2.0 MB  00:00
AlmaLinux 9 - Plus                                 3.2 MB/s |  29 kB  00:00
AlmaLinux 9 - ResilientStorage                      14 MB/s | 446 kB  00:00
AlmaLinux 9 - RT                                    70 MB/s | 1.9 MB  00:00
AlmaLinux 9 - SAP                                  846 kB/s | 9.7 kB  00:00
AlmaLinux 9 - SAPHANA                              1.3 MB/s |  13 kB  00:00
Error: Problem 1: package ovirt-openvswitch-2.17-1.el9.noarch from @System requires openvswitch2.17, but none of the providers can be installed
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-103.el9s.x86_64 from @System
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-103.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-108.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-109.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-115.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-120.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-15.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-31.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-51.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-52.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-55.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-57.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-60.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-62.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-63.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-67.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-68.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-71.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-72.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-76.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-77.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-85.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-87.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-92.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-93.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-95.el9s.x86_64 from centos-nfv-openvswitch
- cannot install the best update candidate for package ovirt-openvswitch-2.17-1.el9.noarch
- cannot install the best update candidate for package openvswitch2.17-2.17.0-103.el9s.x86_64
Problem 2: package python3-rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes python3-openvswitch2.17 < 3.1 provided by python3-openvswitch2.17-2.17.0-120.el9s.x86_64 from centos-nfv-openvswitch
- package openvswitch2.17-ipsec-2.17.0-120.el9s.x86_64 from centos-nfv-openvswitch requires python3-openvswitch2.17 = 2.17.0-120.el9s, but none of the providers can be installed
- cannot install the best update candidate for package python3-openvswitch2.17-2.17.0-103.el9s.x86_64
- cannot install the best update candidate for package openvswitch2.17-ipsec-2.17.0-103.el9s.x86_64
Problem 3: package ovirt-openvswitch-ovn-common-2.17-1.el9.noarch from @System requires ovn22.09, but none of the providers can be installed
- package rdo-ovn-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09 < 22.12 provided by ovn22.09-22.09.0-31.el9s.x86_64 from @System
- package rdo-ovn-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09 < 22.12 provided by ovn22.09-22.09.0-11.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-ovn-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09 < 22.12 provided by ovn22.09-22.09.0-22.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-ovn-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09 < 22.12 provided by ovn22.09-22.09.0-31.el9s.x86_64 from centos-nfv-openvswitch
- cannot install the best update candidate for package ovn22.09-22.09.0-31.el9s.x86_64
- cannot install the best update candidate for package ovirt-openvswitch-ovn-common-2.17-1.el9.noarch
Problem 4: package ovirt-openvswitch-ovn-host-2.17-1.el9.noarch from @System requires ovn22.09-host, but none of the providers can be installed
- package rdo-ovn-host-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09-host < 22.12 provided by ovn22.09-host-22.09.0-31.el9s.x86_64 from @System
- package rdo-ovn-host-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09-host < 22.12 provided by ovn22.09-host-22.09.0-11.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-ovn-host-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09-host < 22.12 provided by ovn22.09-host-22.09.0-22.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-ovn-host-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09-host < 22.12 provided by ovn22.09-host-22.09.0-31.el9s.x86_64 from centos-nfv-openvswitch
- cannot install the best update candidate for package ovn22.09-host-22.09.0-31.el9s.x86_64
- cannot install the best update candidate for package ovirt-openvswitch-ovn-host-2.17-1.el9.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
[root@lnxvirt01 ~]# yum --exclude=kernel*,*openvswitch*,*ovn*,centos-release-nfv-common update
Last metadata expiration check: 0:13:54 ago on Mon 08 Jan 2024 02:51:41 PM EST.
Dependencies resolved.
========================================================================================================================================================================================================================================
Package Architecture Version Repository Size
========================================================================================================================================================================================================================================
Upgrading:
NetworkManager-libreswan x86_64 1.2.14-2.el9_3.alma.1 appstream 124 k
centos-release-ceph-pacific noarch 1.0-2.el9s c9s-extras-common 7.4 k
centos-release-cloud noarch 1-1.el9s c9s-extras-common 7.9 k
centos-release-gluster10 noarch 1.0-2.el9s c9s-extras-common 8.8 k
centos-release-messaging noarch 1-4.el9s c9s-extras-common 8.4 k
centos-release-openstack-yoga noarch 1-4.el9s c9s-extras-common 8.0 k
centos-release-opstools noarch 1-12.el9s c9s-extras-common 8.4 k
centos-release-ovirt45 noarch 9.2-1.el9s c9s-extras-common 18 k
centos-release-rabbitmq-38 noarch 1-4.el9s c9s-extras-common 7.4 k
centos-release-storage-common noarch 2-5.el9s c9s-extras-common 8.3 k
centos-release-virt-common noarch 1-4.el9s c9s-extras-common 7.9 k
ceph-common x86_64 2:16.2.14-1.el9s centos-ceph-pacific 20 M
firefox x86_64 115.6.0-1.el9_3.alma appstream 110 M
glusterfs x86_64 10.5-1.el9s centos-gluster10 606 k
glusterfs-cli x86_64 10.5-1.el9s centos-gluster10 184 k
glusterfs-client-xlators x86_64 10.5-1.el9s centos-gluster10 854 k
glusterfs-fuse x86_64 10.5-1.el9s centos-gluster10 137 k
libcephfs2 x86_64 2:16.2.14-1.el9s centos-ceph-pacific 657 k
libgfrpc0 x86_64 10.5-1.el9s centos-gluster10 53 k
libgfxdr0 x86_64 10.5-1.el9s centos-gluster10 28 k
libglusterd0 x86_64 10.5-1.el9s centos-gluster10 11 k
libglusterfs0 x86_64 10.5-1.el9s centos-gluster10 300 k
libqb x86_64 2.0.8-1.el9 centos-ovirt45 91 k
librados2 x86_64 2:16.2.14-1.el9s centos-ceph-pacific 3.2 M
libradosstriper1 x86_64 2:16.2.14-1.el9s centos-ceph-pacific 469 k
librbd1 x86_64 2:16.2.14-1.el9s centos-ceph-pacific 3.0 M
librgw2 x86_64 2:16.2.14-1.el9s centos-ceph-pacific 3.4 M
libvirt x86_64 9.5.0-7.el9_3.alma.2 appstream 22 k
libvirt-client x86_64 9.5.0-7.el9_3.alma.2 appstream 426 k
libvirt-daemon x86_64 9.5.0-7.el9_3.alma.2 appstream 168 k
libvirt-daemon-common x86_64 9.5.0-7.el9_3.alma.2 appstream 129 k
libvirt-daemon-config-network x86_64 9.5.0-7.el9_3.alma.2 appstream 25 k
libvirt-daemon-config-nwfilter x86_64 9.5.0-7.el9_3.alma.2 appstream 30 k
libvirt-daemon-driver-interface x86_64 9.5.0-7.el9_3.alma.2 appstream 174 k
libvirt-daemon-driver-network x86_64 9.5.0-7.el9_3.alma.2 appstream 212 k
libvirt-daemon-driver-nodedev x86_64 9.5.0-7.el9_3.alma.2 appstream 194 k
libvirt-daemon-driver-nwfilter x86_64 9.5.0-7.el9_3.alma.2 appstream 210 k
libvirt-daemon-driver-qemu x86_64 9.5.0-7.el9_3.alma.2 appstream 909 k
libvirt-daemon-driver-secret x86_64 9.5.0-7.el9_3.alma.2 appstream 171 k
libvirt-daemon-driver-storage x86_64 9.5.0-7.el9_3.alma.2 appstream 22 k
libvirt-daemon-driver-storage-core x86_64 9.5.0-7.el9_3.alma.2 appstream 229 k
libvirt-daemon-driver-storage-disk x86_64 9.5.0-7.el9_3.alma.2 appstream 33 k
libvirt-daemon-driver-storage-iscsi x86_64 9.5.0-7.el9_3.alma.2 appstream 30 k
libvirt-daemon-driver-storage-logical x86_64 9.5.0-7.el9_3.alma.2 appstream 34 k
libvirt-daemon-driver-storage-mpath x86_64 9.5.0-7.el9_3.alma.2 appstream 28 k
libvirt-daemon-driver-storage-rbd x86_64 9.5.0-7.el9_3.alma.2 appstream 38 k
libvirt-daemon-driver-storage-scsi x86_64 9.5.0-7.el9_3.alma.2 appstream 30 k
libvirt-daemon-kvm x86_64 9.5.0-7.el9_3.alma.2 appstream 22 k
libvirt-daemon-lock x86_64 9.5.0-7.el9_3.alma.2 appstream 58 k
libvirt-daemon-log x86_64 9.5.0-7.el9_3.alma.2 appstream 62 k
libvirt-daemon-plugin-lockd x86_64 9.5.0-7.el9_3.alma.2 appstream 33 k
libvirt-daemon-plugin-sanlock x86_64 9.5.0-7.el9_3.alma.2 crb 44 k
libvirt-daemon-proxy x86_64 9.5.0-7.el9_3.alma.2 appstream 166 k
libvirt-libs x86_64 9.5.0-7.el9_3.alma.2 appstream 4.8 M
otopi-common noarch 1.10.4-1.el9 centos-ovirt45 92 k
ovirt-ansible-collection noarch 3.2.0-1.el9 centos-ovirt45 279 k
ovirt-engine-setup-base noarch 4.5.5-1.el9 centos-ovirt45 111 k
ovirt-hosted-engine-ha noarch 2.5.1-1.el9 centos-ovirt45 312 k
ovirt-hosted-engine-setup noarch 2.7.1-1.el9 centos-ovirt45 221 k
ovirt-vmconsole noarch 1.0.9-3.el9 centos-ovirt45 38 k
ovirt-vmconsole-host noarch 1.0.9-3.el9 centos-ovirt45 21 k
python3-ceph-argparse x86_64 2:16.2.14-1.el9s centos-ceph-pacific 46 k
python3-ceph-common x86_64 2:16.2.14-1.el9s centos-ceph-pacific 98 k
python3-cephfs x86_64 2:16.2.14-1.el9s centos-ceph-pacific 193 k
python3-os-brick noarch 5.2.4-1.el9s centos-openstack-yoga 1.1 M
python3-oslo-config noarch 2:8.8.1-1.el9s centos-openstack-yoga 216 k
python3-otopi noarch 1.10.4-1.el9 centos-ovirt45 105 k
python3-ovirt-engine-lib noarch 4.5.5-1.el9 centos-ovirt45 31 k
python3-rados x86_64 2:16.2.14-1.el9s centos-ceph-pacific 343 k
python3-rbd x86_64 2:16.2.14-1.el9s centos-ceph-pacific 314 k
python3-rgw x86_64 2:16.2.14-1.el9s centos-ceph-pacific 106 k
selinux-policy noarch 38.1.29-1.el9 el-classe-ovirt 56 k
selinux-policy-targeted noarch 38.1.29-1.el9 el-classe-ovirt 6.5 M
tigervnc x86_64 1.13.1-3.el9_3.3.alma.1 appstream 297 k
tigervnc-icons noarch 1.13.1-3.el9_3.3.alma.1 appstream 33 k
tigervnc-license noarch 1.13.1-3.el9_3.3.alma.1 appstream 13 k
vdsm x86_64 4.50.5.1-1.el9 centos-ovirt45 337 k
vdsm-api noarch 4.50.5.1-1.el9 centos-ovirt45 101 k
vdsm-client noarch 4.50.5.1-1.el9 centos-ovirt45 23 k
vdsm-common noarch 4.50.5.1-1.el9 centos-ovirt45 130 k
vdsm-http noarch 4.50.5.1-1.el9 centos-ovirt45 14 k
vdsm-jsonrpc noarch 4.50.5.1-1.el9 centos-ovirt45 30 k
vdsm-network x86_64 4.50.5.1-1.el9 centos-ovirt45 209 k
vdsm-python noarch 4.50.5.1-1.el9 centos-ovirt45 1.2 M
vdsm-yajsonrpc noarch 4.50.5.1-1.el9 centos-ovirt45 39 k
vivaldi-stable x86_64 6.5.3206.50-1 el-classe 103 M
Transaction Summary
========================================================================================================================================================================================================================================
Upgrade 86 Packages
Total download size: 267 M
Is this ok [y/N]: N
Operation aborted.
------
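For what it's worth, if the exclusions turn out to be needed across updates, the `--exclude` list shown above can be persisted in dnf's configuration instead of being passed on every run. A minimal sketch, writing to a temporary file here so it is self-contained (on a real host the target would be `/etc/dnf/dnf.conf`):

```shell
# Persist the package exclusions in a dnf config file instead of passing
# --exclude on each invocation (temp file used here for illustration;
# the real target is /etc/dnf/dnf.conf).
conf="$(mktemp)"
printf '[main]\n' > "$conf"
echo 'exclude=*openvswitch*,*ovn*,centos-release-nfv-common' >> "$conf"

# Verify the line landed:
grep '^exclude=' "$conf"
```

Whether excluding is the right long-term answer (versus removing the conflicting centos-openstack-yoga repo) is exactly the question for the devs.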
1 year, 2 months
Error: GPG check FAILED
by juan.gabriel1786@gmail.com
Hello oVirt Support Team,
I am experiencing a GPG key verification issue on my oVirt Node when attempting to update packages. The error persists even after the GPG keys have been imported and seems to be related to package verification.
System Details:
Operating System: oVirt Node 4.5.4
CPE OS Name: cpe:/o:centos:centos:9
Kernel: Linux 5.14.0-202.el9.x86_64
Architecture: x86-64
Hardware Vendor: Supermicro
Hardware Model: X9DRL-3F/iF
Steps to Reproduce:
Imported the GPG key with the command:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
Ran dnf update, which led to the following error:
The GPG keys listed for the "oVirt upstream for CentOS Stream 9 - oVirt 4.5" repository are already installed but they are not correct for this package.
Check that the correct key URLs are configured for this repository. Failing package is: ovirt-node-ng-image-update-4.5.5-1.el9.noarch
Error: GPG check FAILED
The GPG key at file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5 (0x24901D0C) is reported to be already installed, but it does not seem to match the packages being updated.
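For reference, rpm stores each imported key as a pseudo-package named `gpg-pubkey-<keyid>-<timestamp>`, so the key id in the error (0x24901D0C) can be compared against what rpm already has. A small sketch (the helper function is illustrative, not an rpm command; the commented commands are standard rpm key handling, not an oVirt-specific procedure):

```shell
# rpm records imported GPG keys as pseudo-packages gpg-pubkey-<keyid>-<ts>.
# Derive the package-name prefix from the key id shown in the error:
key_pkg_prefix() {
    printf 'gpg-pubkey-%s' "$(printf '%s' "$1" | sed 's/^0[xX]//' | tr 'A-Z' 'a-z')"
}
key_pkg_prefix 0x24901D0C   # -> gpg-pubkey-24901d0c

# On the node, one would then compare and, if the imported copy is stale,
# remove it and re-import the current key file:
#   rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE} %{SUMMARY}\n'
#   rpm -e gpg-pubkey-24901d0c-<timestamp>
#   rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
```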
Could you please advise on how to resolve this GPG key verification failure? I am following standard update procedures, and this issue is preventing me from maintaining the system's security and stability.
Thank you for your time and assistance.
Best regards,
1 year, 2 months
Unable to install oVirt on RHEL7.5
by SS00514758@techmahindra.com
Hi All,
I am unable to install oVirt on RHEL 7.5. To install it I am following the link below:
https://www.ovirt.org/documentation/install-guide/chap-Installing_oVirt.html
However, it is not working for me: a couple of dependencies are not getting installed, and because of this I am not able to run ovirt-engine. Below are the dependency packages that fail to install:
Error: Package: collectd-write_http-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
Requires: collectd(x86-64) = 5.8.0-6.1.el7
Removing: collectd-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-6.1.el7
Updated By: collectd-5.8.1-1.el7.x86_64 (epel)
collectd(x86-64) = 5.8.1-1.el7
Available: collectd-5.7.2-1.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-1.el7
Available: collectd-5.7.2-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-3.el7
Available: collectd-5.8.0-2.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-2.el7
Available: collectd-5.8.0-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-3.el7
Available: collectd-5.8.0-5.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-5.el7
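A common way out of this kind of conflict (an assumption based on the versions shown, not a documented oVirt step) is to stop EPEL from supplying collectd, so that yum resolves it from the opstools repo only. Sketch, using a temporary copy of the repo file so it is self-contained (the real file would be `/etc/yum.repos.d/epel.repo`):

```shell
# Exclude collectd* from the EPEL repo so yum resolves collectd from
# ovirt-4.2-centos-opstools only (temp file for illustration; the real
# target is /etc/yum.repos.d/epel.repo).
repo="$(mktemp)"
cat > "$repo" <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - x86_64
enabled=1
EOF
echo 'exclude=collectd*' >> "$repo"
grep '^exclude=' "$repo"
```

The same effect is available one-off with `yum --exclude='collectd*' update` (a standard yum option).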
Please help me to install this.
Looking forward to resolving this issue.
Regards
Sumit Sahay
1 year, 2 months
Cannot get Ovirt 4.5 to work, how ever I try. Virgin install: no pki ca-cert gen, restoring: no OVN connection
by julian.steiner@conesphere.com
Hi there,
over the last months I've hunkered down to update my company's antiquated oVirt 4.3. To manage this in an orderly fashion we replicated the setup.
In the update process I always arrive at the same problem. Once I managed to solve it by chance, but I cannot reproduce the solution.
The setup is Ovirt Engine running on a dedicated Centos-Stream-8 virtual machine managed in VirtManager. The nodes are either OvirtNode 4.4 or 4.5. The problem exists on both.
Issue1:
Updating to 4.4 works without issue. Then, regardless of whether I update by restoring to oVirt 4.5 or by updating the engine through the update path, networks stop functioning and, very peculiarly, I get a very strange keymap in the VM console. It's no real keymap: it's QWERTZ, but # resolves as 3, and all kinds of strange stuff. However, this can be resolved on an individual basis by setting the VM-console keymap to de (German). Connected hosts and new hosts always display "OVN connected: No".
The error log hints at some kind of SSL error. I either get dropping connections or protocol mismatches in the node log. I deactivated the oVirt 4.4 repositories on the engine and did a distro-sync, because I found an old bug report suggesting that protocol mismatches may result from unclean python-library versioning.
I re-enrolled certificates and reinstalled the host, and still cannot get a connection:
Logs on host:
/var/log/ovn-controller.log:
2023-12-19T11:27:14.245Z|00018|memory|INFO|6604 kB peak resident set size after 15.1 seconds
2023-12-19T11:27:14.245Z|00019|memory|INFO|idl-cells:100
2023-12-19T11:29:34.483Z|00001|vlog|INFO|opened log file /var/log/ovn/ovn-controller.log
2023-12-19T11:29:34.512Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
2023-12-19T11:29:34.513Z|00003|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
2023-12-19T11:29:34.517Z|00004|main|INFO|OVN internal version is : [21.12.3-20.21.0-61.4]
2023-12-19T11:29:34.517Z|00005|main|INFO|OVS IDL reconnected, force recompute.
2023-12-19T11:29:34.573Z|00006|reconnect|INFO|ssl:127.0.0.1:6642: connecting...
2023-12-19T11:29:34.573Z|00007|main|INFO|OVNSB IDL reconnected, force recompute.
2023-12-19T11:29:34.573Z|00008|reconnect|INFO|ssl:127.0.0.1:6642: connection attempt failed (Connection refused)
2023-12-19T11:29:35.575Z|00009|reconnect|INFO|ssl:127.0.0.1:6642: connecting...
2023-12-19T11:29:35.589Z|00010|reconnect|INFO|ssl:127.0.0.1:6642: connection attempt failed (Connection refused)
2023-12-19T11:29:35.589Z|00011|reconnect|INFO|ssl:127.0.0.1:6642: waiting 2 seconds before reconnect
2023-12-19T11:29:37.592Z|00012|reconnect|INFO|ssl:127.0.0.1:6642: connecting...
2023-12-19T11:29:37.592Z|00013|reconnect|INFO|ssl:127.0.0.1:6642: connection attempt failed (Connection refused)
2023-12-19T11:29:37.592Z|00014|reconnect|INFO|ssl:127.0.0.1:6642: waiting 4 seconds before reconnect
2023-12-19T11:29:41.596Z|00015|reconnect|INFO|ssl:127.0.0.1:6642: connecting...
2023-12-19T11:29:41.596Z|00016|reconnect|INFO|ssl:127.0.0.1:6642: connection attempt failed (Connection refused)
2023-12-19T11:29:41.596Z|00017|reconnect|INFO|ssl:127.0.0.1:6642: continuing to reconnect in the background but suppressing further logging
/var/log/openvswitch/ovsdb-server.log:
2023-12-19T11:26:56.889Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2023-12-19T11:26:56.915Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.15.8
2023-12-19T11:27:06.922Z|00003|memory|INFO|20624 kB peak resident set size after 10.0 seconds
2023-12-19T11:27:06.922Z|00004|memory|INFO|cells:128 monitors:5 sessions:3
2023-12-19T11:29:30.771Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2023-12-19T11:29:30.813Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.15.8
2023-12-19T11:29:31.047Z|00003|jsonrpc|WARN|unix#0: receive error: Connection reset by peer
2023-12-19T11:29:31.047Z|00004|reconnect|WARN|unix#0: connection dropped (Connection reset by peer)
2023-12-19T11:29:32.821Z|00005|jsonrpc|WARN|unix#2: receive error: Connection reset by peer
2023-12-19T11:29:32.821Z|00006|reconnect|WARN|unix#2: connection dropped (Connection reset by peer)
2023-12-19T11:29:33.139Z|00007|jsonrpc|WARN|unix#4: receive error: Connection reset by peer
2023-12-19T11:29:33.139Z|00008|reconnect|WARN|unix#4: connection dropped (Connection reset by peer)
2023-12-19T11:29:40.864Z|00009|memory|INFO|23108 kB peak resident set size after 10.1 seconds
2023-12-19T11:29:40.864Z|00010|memory|INFO|cells:128 monitors:4 sessions:3
Logs on engine:
/var/log/ovn/ovsdb-server-nb.log:
2023-12-18T19:36:23.056Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-nb.log
2023-12-18T19:36:23.784Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.15.8
2023-12-18T19:36:24.275Z|00003|jsonrpc|WARN|unix#0: receive error: Connection reset by peer
2023-12-18T19:36:24.276Z|00004|reconnect|WARN|unix#0: connection dropped (Connection reset by peer)
2023-12-18T19:36:33.808Z|00005|memory|INFO|22528 kB peak resident set size after 10.8 seconds
2023-12-18T19:36:33.808Z|00006|memory|INFO|cells:99 monitors:2 sessions:1
/var/log/ovirt-engine/engine.log (currently unable to start vms. normally not the case in my tests but error message seems related)
2023-12-19 06:49:17,982-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [43d1e22d] EVENT_ID: PROVIDER_SYNCHRONIZATION_STARTED(223), Provider ovirt-provider-ovn synchronization started.
2023-12-19 06:49:18,122-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [43d1e22d] EVENT_ID: PROVIDER_SYNCHRONIZATION_ENDED(224), Provider ovirt-provider-ovn synchronization ended.
2023-12-19 06:49:18,122-05 ERROR [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [43d1e22d] Command 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand' failed: EngineException: (Failed with error Unsupported or unrecognized SSL message and code 5050)
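The controller log above shows repeated "Connection refused" on `ssl:127.0.0.1:6642`, i.e. nothing is accepting connections on the southbound DB port, while the engine log reports an SSL handshake failure. A quick way to separate the two failure modes is to check what is listening on 6642 and what certificate it presents. The helper below is illustrative (it parses a sample `ss -tln` listing so the sketch is self-contained); the real commands are in the comments:

```shell
# Hypothetical helper: decide from an `ss -tln` listing whether anything
# is listening on the OVN southbound port 6642.
sb_listening() {
    printf '%s\n' "$1" | grep -q ':6642 '
}

# Sample listing in ss(8) format (illustrative):
sample='State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
LISTEN  0       10      0.0.0.0:6642         0.0.0.0:*'

sb_listening "$sample" && echo "southbound DB is listening"

# On the engine itself (standard OVN/OVS and OpenSSL tooling, not an
# oVirt-specific recipe):
#   ss -tln | grep 6642              # is the southbound ovsdb-server up at all?
#   ovn-sbctl get-connection         # should show an SSL (pssl) listener
#   openssl s_client -connect 127.0.0.1:6642 </dev/null  # inspect the cert
```

If nothing listens at all, the problem is the southbound DB service; if the listener is up but the handshake fails, it is the certificate configuration.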
Issue2:
When installing the oVirt 4.5 engine, engine-setup always fails in the PKI phase because no new root cert is generated. I believe it ultimately says apache.ca is missing. This is also on a fresh CentOS-Stream-8 machine following the official install instructions.
Please help. :)
1 year, 2 months
Disk upload: EngineException: java.lang.NullPointerException (Failed with error ENGINE and code 5001)
by goestin@intert00bz.nl
Hi All, after adding an oVirt node as a local storage machine I am unable to
upload a disk to the datastore. The button "test connection" shows:
"Connection to ovirt-imageio was successful.".
Version: 4.5.4-1.el8
OS: AlmaLinux 8.9 (Midnight Oncilla)
Below is an excerpt from the engine.log showing the entire upload session.
Note 1: The machine "kvm-sandbox-qm7" is the machine for the
localstorage cluster. The machine "kvm-sandbox-gcz" is a machine from
the other "default cluster" and I was under the impression that the two
clusters would be completely separate things and should not interfere
with each other. But I am not sure about that.
--- snip ---
2024-01-03 10:05:26,898Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Lock Acquired to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-01-03 10:05:26,938Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Running command: TransferDiskImageCommand internal: false. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: StorageAction group CREATE_DISK with role type USER
2024-01-03 10:05:26,938Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Creating ImageTransfer entity for command 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7', proxyEnabled: true
2024-01-03 10:05:26,940Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Starting image transfer: ImageTransfer:{id='ec704c40-89bc-4fdf-a44a-607dd7b9b2f7', phase='Initializing', type='Upload', active='false', lastUpdated='Wed Jan 03 10:05:26 UTC 2024', message='null', vdsId='null', diskId='null', imagedTicketId='null', proxyUri='null', bytesSent='null', bytesTotal='697434112', clientInactivityTimeout='60', timeoutPolicy='legacy', imageFormat='COW', transferClientType='Transfer via browser', shallow='false'}
2024-01-03 10:05:26,940Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Creating disk image
2024-01-03 10:05:26,953Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Running command: AddDiskCommand internal: true. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: StorageAction group CREATE_DISK with role type USER
2024-01-03 10:05:26,961Z INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Running command: AddImageFromScratchCommand internal: true. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: Storage
2024-01-03 10:05:26,981Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', ignoreFailoverLimit='false', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', imageSizeInBytes='10737418240', volumeFormat='COW', newImageId='6b80fba5-c2ae-4b68-a24d-21d7f657da8f', imageType='Sparse', newImageDescription='{"DiskAlias":"aaa","DiskDescription":""}', imageInitialSizeInBytes='0', imageId='00000000-0000-0000-0000-000000000000', sourceImageGroupId='00000000-0000-0000-0000-000000000000', shouldAddBitmaps='false', legal='true', sequenceNumber='1', bitmap='null'}), log id: b4c79ef
2024-01-03 10:05:27,406Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, CreateVolumeVDSCommand, return: 6b80fba5-c2ae-4b68-a24d-21d7f657da8f, log id: b4c79ef
2024-01-03 10:05:27,409Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'a6f3ff83-daa4-4799-908a-07029ff8f6ef'
2024-01-03 10:05:27,410Z INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] CommandMultiAsyncTasks::attachTask: Attaching task '5a29a235-b61c-4efb-959a-f29ae7f863be' to command 'a6f3ff83-daa4-4799-908a-07029ff8f6ef'.
2024-01-03 10:05:27,427Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Adding task '5a29a235-b61c-4efb-959a-f29ae7f863be' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2024-01-03 10:05:27,435Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] BaseAsyncTask::startPollingTask: Starting to poll task '5a29a235-b61c-4efb-959a-f29ae7f863be'.
2024-01-03 10:05:27,449Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] EVENT_ID: ADD_DISK_INTERNAL(2,036), Add-Disk operation of 'aaa' was initiated by the system.
2024-01-03 10:05:27,457Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] EVENT_ID: TRANSFER_IMAGE_INITIATED(1,031), Image Upload with disk aaa was initiated by [[redacted user]]@[[redacted]]@[[redacted]].
2024-01-03 10:05:27,866Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-95) [7855db64-635d-430f-9de8-21b1983e43a0] Command 'AddDisk' (id: 'dfe023fa-0a96-40e8-9934-fb94a156bff6') waiting on child command id: 'a6f3ff83-daa4-4799-908a-07029ff8f6ef' type:'AddImageFromScratch' to complete
2024-01-03 10:05:27,869Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-95) [7855db64-635d-430f-9de8-21b1983e43a0] Waiting for disk to be added for image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7'
2024-01-03 10:05:29,872Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-16) [7855db64-635d-430f-9de8-21b1983e43a0] Command 'AddDisk' (id: 'dfe023fa-0a96-40e8-9934-fb94a156bff6') waiting on child command id: 'a6f3ff83-daa4-4799-908a-07029ff8f6ef' type:'AddImageFromScratch' to complete
2024-01-03 10:05:29,875Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-16) [7855db64-635d-430f-9de8-21b1983e43a0] Waiting for disk to be added for image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7'
2024-01-03 10:05:30,014Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2024-01-03 10:05:30,019Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] SPMAsyncTask::PollTask: Polling task '5a29a235-b61c-4efb-959a-f29ae7f863be' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2024-01-03 10:05:30,019Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] BaseAsyncTask::onTaskEndSuccess: Task '5a29a235-b61c-4efb-959a-f29ae7f863be' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2024-01-03 10:05:30,021Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] CommandAsyncTask::endActionIfNecessary: All tasks of command 'a6f3ff83-daa4-4799-908a-07029ff8f6ef' has ended -> executing 'endAction'
2024-01-03 10:05:30,022Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'a6f3ff83-daa4-4799-908a-07029ff8f6ef'): calling endAction '.
2024-01-03 10:05:30,022Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'AddImageFromScratch',
2024-01-03 10:05:30,027Z INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] Command [id=a6f3ff83-daa4-4799-908a-07029ff8f6ef]: Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
2024-01-03 10:05:30,027Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' completed, handling the result.
2024-01-03 10:05:30,027Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' succeeded, clearing tasks.
2024-01-03 10:05:30,027Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '5a29a235-b61c-4efb-959a-f29ae7f863be'
2024-01-03 10:05:30,028Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', ignoreFailoverLimit='false', taskId='5a29a235-b61c-4efb-959a-f29ae7f863be'}), log id: af95363
2024-01-03 10:05:30,028Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] START, HSMClearTaskVDSCommand(HostName = kvm-sandbox-qm7, HSMTaskGuidBaseVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89', taskId='5a29a235-b61c-4efb-959a-f29ae7f863be'}), log id: 52de524b
2024-01-03 10:05:30,044Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, HSMClearTaskVDSCommand, return: , log id: 52de524b
2024-01-03 10:05:30,044Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, SPMClearTaskVDSCommand, return: , log id: af95363
2024-01-03 10:05:30,050Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] BaseAsyncTask::removeTaskFromDB: Removed task '5a29a235-b61c-4efb-959a-f29ae7f863be' from DataBase
2024-01-03 10:05:30,050Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'a6f3ff83-daa4-4799-908a-07029ff8f6ef'
2024-01-03 10:05:31,567Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-6) [1ab78812-4bb8-4889-bea0-d2f2a82d52af] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: SystemAction group CREATE_DISK with role type USER
2024-01-03 10:05:33,877Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Getting volume info for image '13dcaf25-6b58-4c79-85a7-0aecd153fb59/6b80fba5-c2ae-4b68-a24d-21d7f657da8f'
2024-01-03 10:05:33,894Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] START, GetVolumeInfoVDSCommand(HostName = kvm-sandbox-qm7, GetVolumeInfoVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89', storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', imageId='6b80fba5-c2ae-4b68-a24d-21d7f657da8f'}), log id: 78edc340
2024-01-03 10:05:33,908Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@14184952, log id: 78edc340
2024-01-03 10:05:33,908Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Command 'AddDisk' id: 'dfe023fa-0a96-40e8-9934-fb94a156bff6' child commands '[a6f3ff83-daa4-4799-908a-07029ff8f6ef]' executions were completed, status 'SUCCEEDED'
2024-01-03 10:05:33,967Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Successfully added Upload disk 'aaa' (disk id: '13dcaf25-6b58-4c79-85a7-0aecd153fb59', image id: '6b80fba5-c2ae-4b68-a24d-21d7f657da8f') for image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7'
2024-01-03 10:05:33,978Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] START, PrepareImageVDSCommand(HostName = kvm-sandbox-gcz, PrepareImageVDSCommandParameters:{hostId='059c7eaf-da39-41f2-bb61-659ab3bd1b61'}), log id: 104764a4
2024-01-03 10:05:33,981Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Failed in 'PrepareImageVDS' method, for vds: 'kvm-sandbox-gcz'; host: 'kvm-sandbox-gcz.hprvsr.infra.pdc.[[redacted]]': null
2024-01-03 10:05:33,982Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Command 'PrepareImageVDSCommand(HostName = kvm-sandbox-gcz, PrepareImageVDSCommandParameters:{hostId='059c7eaf-da39-41f2-bb61-659ab3bd1b61'})' execution failed: null
2024-01-03 10:05:33,982Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, PrepareImageVDSCommand, return: , log id: 104764a4
2024-01-03 10:05:33,982Z ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Failed to prepare image for image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7': {}: org.ovirt.engine.core.common.errors.EngineException: EngineException: java.lang.NullPointerException (Failed with error ENGINE and code 5001)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2121)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.prepareImage(TransferDiskImageCommand.java:188)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.startImageTransferSession(TransferDiskImageCommand.java:1064)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.handleImageIsReadyForTransfer(TransferDiskImageCommand.java:681)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.handleInitializing(TransferDiskImageCommand.java:654)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.executeStateHandler(TransferDiskImageCommand.java:587)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.proceedCommandExecution(TransferDiskImageCommand.java:574)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferImageCommandCallback.doPolling(TransferImageCommandCallback.java:21)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:360)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:511)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:227)
Caused by: java.lang.NullPointerException
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageReturn.<init>(PrepareImageReturn.java:15)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer.prepareImage(JsonRpcVdsServer.java:1947)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand.executeImageActionVdsBrokerCommand(PrepareImageVDSCommand.java:18)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand.executeImageActionVdsBrokerCommand(PrepareImageVDSCommand.java:5)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.ImageActionsVDSCommandBase.executeVdsBrokerCommand(ImageActionsVDSCommandBase.java:14)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:410)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source)
at jdk.internal.reflect.GeneratedMethodAccessor87.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:51)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:78)
at org.ovirt.engine.core.common//org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12)
at jdk.internal.reflect.GeneratedMethodAccessor80.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown Source)
... 19 more
2024-01-03 10:05:34,991Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
2024-01-03 10:05:34,997Z INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' successfully.
2024-01-03 10:05:35,010Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', ignoreFailoverLimit='false', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', imageId='6b80fba5-c2ae-4b68-a24d-21d7f657da8f'}), log id: 127cba80
2024-01-03 10:05:35,011Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] START, GetVolumeInfoVDSCommand(HostName = kvm-sandbox-qm7, GetVolumeInfoVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89', storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', imageId='6b80fba5-c2ae-4b68-a24d-21d7f657da8f'}), log id: 1f0ed2a8
2024-01-03 10:05:35,024Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@14184952, log id: 1f0ed2a8
2024-01-03 10:05:35,024Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@14184952, log id: 127cba80
2024-01-03 10:05:35,046Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] START, PrepareImageVDSCommand(HostName = kvm-sandbox-qm7, PrepareImageVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89'}), log id: 39df22a1
2024-01-03 10:05:35,076Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, PrepareImageVDSCommand, return: PrepareImageReturn:{status='Status [code=0, message=Done]'}, log id: 39df22a1
2024-01-03 10:05:35,077Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] START, GetQemuImageInfoVDSCommand(HostName = kvm-sandbox-qm7, GetVolumeInfoVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89', storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', imageId='6b80fba5-c2ae-4b68-a24d-21d7f657da8f'}), log id: 59083d66
2024-01-03 10:05:35,093Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, GetQemuImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.QemuImageInfo@12025249, log id: 59083d66
2024-01-03 10:05:35,095Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] START, TeardownImageVDSCommand(HostName = kvm-sandbox-qm7, ImageActionsVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89'}), log id: 2c3b32ab
2024-01-03 10:05:35,097Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, TeardownImageVDSCommand, return: StatusReturn:{status='Status [code=0, message=Done]'}, log id: 2c3b32ab
2024-01-03 10:05:35,106Z WARN [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [] VM is null - no unlocking
2024-01-03 10:05:35,132Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [] EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'aaa' was successfully added.
2024-01-03 10:05:35,134Z ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand' with failure.
2024-01-03 10:05:35,134Z ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] Failed to transfer disk '00000000-0000-0000-0000-000000000000' for image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7'
2024-01-03 10:05:35,157Z INFO [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Running command: RemoveDiskCommand internal: true. Entities affected : ID: 13dcaf25-6b58-4c79-85a7-0aecd153fb59 Type: DiskAction group DELETE_DISK with role type USER
2024-01-03 10:05:35,176Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Running command: RemoveImageCommand internal: true. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: Storage
2024-01-03 10:05:35,202Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', ignoreFailoverLimit='false', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', postZeros='false', discard='false', forceDelete='false'}), log id: 10fab46b
2024-01-03 10:05:35,449Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] FINISH, DeleteImageGroupVDSCommand, return: , log id: 10fab46b
2024-01-03 10:05:35,451Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command '8640049f-0ead-486a-93b4-dcbbaa353294'
2024-01-03 10:05:35,451Z INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] CommandMultiAsyncTasks::attachTask: Attaching task 'b51f31d8-944b-4539-beb6-a0ab995073c6' to command '8640049f-0ead-486a-93b4-dcbbaa353294'.
2024-01-03 10:05:35,463Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Adding task 'b51f31d8-944b-4539-beb6-a0ab995073c6' (Parent Command 'RemoveImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2024-01-03 10:05:35,468Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] BaseAsyncTask::startPollingTask: Starting to poll task 'b51f31d8-944b-4539-beb6-a0ab995073c6'.
2024-01-03 10:05:35,533Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] EVENT_ID: USER_FINISHED_REMOVE_DISK(2,014), Disk aaa was successfully removed from domain localstorage (User [[redacted user]]@[[redacted]]@[[redacted]]).
2024-01-03 10:05:35,534Z INFO [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Lock freed to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-01-03 10:05:35,534Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Lock freed to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-01-03 10:05:35,535Z INFO [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Updating image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7' phase from 'Initializing' to 'Finished Failure'
2024-01-03 10:05:35,547Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] EVENT_ID: TRANSFER_IMAGE_FAILED(1,034), Image Upload with disk aaa failed.
2024-01-03 10:05:35,646Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-1) [d4821bf6-c786-4129-a3b0-4a79fc2d61d7] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: SystemAction group CREATE_DISK with role type USER
2024-01-03 10:05:36,562Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-62) [5871c85c] Command 'RemoveDisk' (id: 'b01c95d8-b3c3-40db-9a41-6fe1e817fe5a') waiting on child command id: '8640049f-0ead-486a-93b4-dcbbaa353294' type:'RemoveImage' to complete
2024-01-03 10:05:36,565Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-62) [5871c85c] Waiting on remove image command to complete the task 'b51f31d8-944b-4539-beb6-a0ab995073c6'
2024-01-03 10:05:38,569Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-32) [5871c85c] Command 'RemoveDisk' (id: 'b01c95d8-b3c3-40db-9a41-6fe1e817fe5a') waiting on child command id: '8640049f-0ead-486a-93b4-dcbbaa353294' type:'RemoveImage' to complete
2024-01-03 10:05:38,573Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-32) [5871c85c] Waiting on remove image command to complete the task 'b51f31d8-944b-4539-beb6-a0ab995073c6'
2024-01-03 10:05:39,645Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-1) [95e8a4ca-324f-4f9e-86a0-ef324383ca64] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: SystemAction group CREATE_DISK with role type USER
2024-01-03 10:05:40,022Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-40) [] Polling and updating Async Tasks: 2 tasks, 1 tasks to poll now
2024-01-03 10:05:40,027Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-40) [] SPMAsyncTask::PollTask: Polling task 'b51f31d8-944b-4539-beb6-a0ab995073c6' (Parent Command 'RemoveImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2024-01-03 10:05:40,027Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-40) [] BaseAsyncTask::onTaskEndSuccess: Task 'b51f31d8-944b-4539-beb6-a0ab995073c6' (Parent Command 'RemoveImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2024-01-03 10:05:40,030Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-40) [] CommandAsyncTask::endActionIfNecessary: All tasks of command '8640049f-0ead-486a-93b4-dcbbaa353294' has ended -> executing 'endAction'
2024-01-03 10:05:40,030Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-40) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: '8640049f-0ead-486a-93b4-dcbbaa353294'): calling endAction '.
2024-01-03 10:05:40,031Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'RemoveImage',
2024-01-03 10:05:40,033Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'RemoveImage' completed, handling the result.
2024-01-03 10:05:40,034Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'RemoveImage' succeeded, clearing tasks.
2024-01-03 10:05:40,034Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] SPMAsyncTask::ClearAsyncTask: Attempting to clear task 'b51f31d8-944b-4539-beb6-a0ab995073c6'
2024-01-03 10:05:40,034Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', ignoreFailoverLimit='false', taskId='b51f31d8-944b-4539-beb6-a0ab995073c6'}), log id: 7535a00c
2024-01-03 10:05:40,035Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] START, HSMClearTaskVDSCommand(HostName = kvm-sandbox-qm7, HSMTaskGuidBaseVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89', taskId='b51f31d8-944b-4539-beb6-a0ab995073c6'}), log id: 3522ca07
2024-01-03 10:05:40,046Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] FINISH, HSMClearTaskVDSCommand, return: , log id: 3522ca07
2024-01-03 10:05:40,046Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] FINISH, SPMClearTaskVDSCommand, return: , log id: 7535a00c
2024-01-03 10:05:40,050Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] BaseAsyncTask::removeTaskFromDB: Removed task 'b51f31d8-944b-4539-beb6-a0ab995073c6' from DataBase
2024-01-03 10:05:40,050Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity '8640049f-0ead-486a-93b4-dcbbaa353294'
2024-01-03 10:05:42,583Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-11) [5871c85c] Command 'RemoveDisk' (id: 'b01c95d8-b3c3-40db-9a41-6fe1e817fe5a') waiting on child command id: '8640049f-0ead-486a-93b4-dcbbaa353294' type:'RemoveImage' to complete
2024-01-03 10:05:42,593Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-11) [5871c85c] Remove image command has completed successfully for disk '13dcaf25-6b58-4c79-85a7-0aecd153fb59' with async task(s) '[b51f31d8-944b-4539-beb6-a0ab995073c6]'.
2024-01-03 10:05:44,692Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-39) [5871c85c] Command 'RemoveDisk' id: 'b01c95d8-b3c3-40db-9a41-6fe1e817fe5a' child commands '[8640049f-0ead-486a-93b4-dcbbaa353294]' executions were completed, status 'SUCCEEDED'
2024-01-03 10:05:45,719Z INFO [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-26) [5871c85c] Ending command 'org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand' successfully.
--- /snip ---
Please let me know if I can supply additional information which could be
relevant.
Kind Regards,
Justin Zandbergen.
1 year, 3 months
ovirt-engine certificate renewal
by bill.hong@neurogine.com
Hi,
I'm running oVirt version 4.5.3.2-1.el8 with a 1 + 3 node setup.
Currently I'm encountering an issue where the ovirt-engine portal certificate has already expired:
"PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed "
I'm aware of the solution of renewing the cert by running "engine-setup --offline". (https://yaohuablog.com/zh/ovirt-engine-upgrade-web-certificate)
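For anyone following this thread, the usual renewal flow is roughly the sketch below. This is an illustration only — prompts and file paths vary between oVirt versions, and a working backup should exist before touching the PKI:

```shell
# Sketch of the typical certificate-renewal flow (verify against your version's docs).

# 1. Take a full engine backup first (resolve any backup errors before proceeding)
engine-backup --mode=backup \
              --file=/root/engine-backup-$(date +%F).tar.gz \
              --log=/root/engine-backup-$(date +%F).log

# 2. Re-run setup offline; answer "Yes" when asked whether to renew the PKI/certificates
engine-setup --offline

# 3. Restart the engine and confirm the portal certificate's new expiry date
systemctl restart ovirt-engine
echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -enddate
```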
However, my host machine has a problem with the psql command whenever I run engine-backup.
[root@server1~]# engine-backup --mode=backup
Start of engine-backup with mode 'backup'
scope: all
archive file: /var/lib/ovirt-engine-backup/ovirt-engine-backup-20240103163047.backup
log file: /var/log/ovirt-engine-backup/ovirt-engine-backup-20240103163047.log
psql: /lib64/libpq.so.5: no version information available (required by psql)
psql: /lib64/libpq.so.5: no version information available (required by psql)
psql: /lib64/libpq.so.5: no version information available (required by psql)
Backing up:
psql: /lib64/libpq.so.5: no version information available (required by psql)
psql: /lib64/libpq.so.5: no version information available (required by psql)
psql: /lib64/libpq.so.5: no version information available (required by psql)
Notifying engine
- Files
- Engine database 'engine'
Notifying engine
FATAL: Database engine backup failed
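A way to narrow down where the mismatched libpq comes from before attempting the renewal — a diagnostic sketch, assuming standard EL8 package layout:

```shell
# Which package owns the libpq shared library that psql is loading?
rpm -qf /lib64/libpq.so.5

# Which package provides the psql client itself?
rpm -qf "$(command -v psql)"

# What does the dynamic linker actually resolve for psql's libpq dependency?
ldd "$(command -v psql)" | grep libpq
```

If the two `rpm -qf` results come from different postgresql module streams, that mismatch is the likely source of the "no version information available" warnings.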
[root@server1~]# dnf module list postgresql
Last metadata expiration check: 1:55:29 ago on Wed 03 Jan 2024 02:38:30 PM +08.
CentOS Stream 8 - AppStream
Name         Stream   Profiles             Summary
postgresql   9.6      client, server [d]   PostgreSQL server and client module
postgresql   10 [d]   client, server [d]   PostgreSQL server and client module
postgresql   12 [e]   client, server [d]   PostgreSQL server and client module
postgresql   13       client, server [d]   PostgreSQL server and client module
postgresql   15       client, server       PostgreSQL server and client module
postgresql   16       client, server [d]   PostgreSQL server and client module
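Given the output above, one possible remediation is to re-align the enabled postgresql module stream with whatever the engine database actually runs. A hedged sketch — the stream number (12, marked [e] above) is an assumption; confirm the engine's real server version before switching anything:

```shell
# Check which PostgreSQL server version the engine DB is actually running
sudo -u postgres psql -c 'SHOW server_version;' 2>/dev/null || rpm -q postgresql-server

# Reset the module state and re-enable the matching stream
# (12 is used here only because it is the [e]nabled stream in the output above)
dnf module reset -y postgresql
dnf module enable -y postgresql:12

# Bring client and server packages back in sync with that stream,
# so psql and libpq.so.5 come from the same build
dnf distro-sync -y postgresql postgresql-server
```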
Questions:
1. Should I fix the psql error first? If I just want to renew my certificate, will the psql error cause the "engine-setup --offline" command to fail?
2. If "engine-setup --offline" fails to renew the certificate, will my running VMs be affected or go down? Is there a recovery method afterwards? Would reinstallation work on a stand-alone machine?
1 year, 3 months