Migrating from a self hosted engine to standalone
by redhat@intheoutback.com
Hi, I am in the process of moving our oVirt environment from a self-hosted engine to a standalone engine on its own hardware. I have Googled and found a procedure for going from standalone to self-hosted, but not the other way around.
My current situation is that I have 5 locations running oVirt 4.3, each with 3 to 4 hypervisors and an iSCSI storage backend, with a self-hosted engine. All these locations are operational; a short downtime is acceptable if it is a must, but losing a VM is NOT acceptable.
I also have one oVirt 4.3 environment with 2 hypervisors and an iSCSI backend that is my QA/Test.
All my networks are on internal networks with no outside world connections.
Most importantly, we are also looking at upgrading from 4.3 to 4.4
I have not found any straightforward way to migrate from self hosted to standalone.
My current plan is to do the following.
1) Create a new 4.4 standalone engine
2) Remove one hypervisor from the 4.3 cluster
3) Kickstart the hypervisor to RHEL 8.8 and configure ready for oVirt 4.4
4) Add the new host to the standalone engine.
5) Shut down and export a number of VMs in the oVirt 4.3 environment and import them into the new oVirt 4.4 (a hedged REST export sketch follows below the list).
6) Repeat steps 2 to 5 until everything is moved over.
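For step 5, a hedged sketch of driving the export through the REST API (endpoint and body are from memory of the v4 API; the engine URL, credentials, export domain name and VM id are placeholders, so verify against your own API docs):
# shut the VM down first, then export it to an attached export domain
curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -d '<action><storage_domain><name>export_domain</name></storage_domain><exclusive>false</exclusive></action>' \
  'https://old-engine.example.com/ovirt-engine/api/vms/VM_ID/export'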
Just wanting to get your expert opinions on this method, or is there a much quicker and easier method that will not risk losing the cluster/VMs or cause an extended outage?
Since we need to upgrade from 4.3 to 4.4 anyway, I thought this a better method than upgrading the operational clusters in place.
Thanks
1 year, 4 months
Need to renew ovirt engine certificate
by Sachendra Shukla
Hi Team,
The oVirt Engine certificate is scheduled to expire on February 1, 2024.
Consequently, we need to initiate the certificate upgrade process. Could
you please share the steps and process for the certificate upgrade? I have
attached a snapshot below for your reference.
[image: image.png]
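(For reference, the expiry shown in the snapshot can also be checked on the engine host, and re-running engine-setup offers PKI renewal when certificates are close to expiry; a hedged sketch with standard paths assumed:)
# check current expiry of the main engine certificates
openssl x509 -in /etc/pki/ovirt-engine/certs/apache.cer -noout -enddate
openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer -noout -enddate
# engine-setup prompts to renew the PKI when certificates are near expiry
engine-setup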
Regards,
Sachendra Shukla
Yagna iQ, Inc. and subsidiaries
HQ Address: Yagna iQ Inc. 7700 Windrose Ave, Suite G300, Plano, TX 75024,
USA 75024,
Website: https://yagnaiq.com
Contact Customer Support: support(a)yagnaiq.com
Privacy Policy: https://www.yagnaiq.com/privacy-policy/
1 year, 4 months
Re: [ovirt-devel] Re: oVirt 4.6 OS versions
by Guillaume Pavese
Unless someone from the community steps up to take RedHat's role, there
won't be any 4.6
On Fri, Jan 12, 2024 at 8:51 AM Diggy Mc <d03(a)bornfree.org> wrote:
>
> Isn't the oVirt 4.5 Hosted Engine built on CentOS Stream 8 ??? Stream 8
> ends in May 2024. I ask because we are still running on 4.4 and are
> thinking about holding off until oVirt 4.6 before we deploy a new oVirt
> environment.
> _______________________________________________
> Devel mailing list -- devel(a)ovirt.org
> To unsubscribe send an email to devel-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MBQDZTC5K3R...
>
1 year, 4 months
oVirt 4.5.5 - Prb with qemu-kvm after upgrade
by Christophe GRENIER
Hello
I have a standalone oVirt Manager 4.5.5-1.el8 and two small clusters.
After upgrading ovir01001 in the "PreProd" cluster from AlmaLinux 8.8 to
8.9, the host was successfully activated but failed to take any VM.
centos-release-ceph-pacific.noarch 1.0-2.el8 @cs8-extras
centos-release-gluster10.noarch 1.0-1.el8s @cs8-extras-common
centos-release-nfv-common.noarch 1-3.el8 @cs8-extras
centos-release-nfv-openvswitch.noarch 1-3.el8 @cs8-extras
centos-release-opstools.noarch 1-12.el8 @cs8-extras
centos-release-ovirt45.noarch 8.9-1.el8s @cs8-extras-common
centos-release-storage-common.noarch 2-2.el8 @cs8-extras
centos-release-stream.x86_64 8.1-1.1911.0.7.el8 @cs8-extras
centos-release-virt-common.noarch 1-2.el8 @cs8-extras
vdsm.x86_64 4.50.5.1-1.el8 @centos-ovirt45
The problem has been "solved" by downgrading all qemu-* packages to the
version in AlmaLinux 8.8
ie. qemu-kvm-6.2.0-40.module_el8.9.0+3681+41cbbcc0.1.alma.1 =>
qemu-kvm-6.2.0-33.module_el8.8.0+3612+f18d2b89.alma.1.x86_64
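(The downgrade itself was presumably something along these lines; a sketch, not the poster's exact command, and it needs the 8.8 build to still be available in an enabled repository:)
# list available versions, then pin back to the 8.8 build mentioned above
dnf --showduplicates list qemu-kvm
dnf downgrade qemu-kvm-6.2.0-33.module_el8.8.0+3612+f18d2b89.alma.1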
Please find the relevant logs:
- engine_when_failed.log https://pastebin.com/7MG6fYGY
- engine_when_ok.log https://pastebin.com/MegqmMbg
- vdsm_when_failed.log https://pastebin.com/ae4w0pix
- vdsm_when_ok.log https://pastebin.com/d7P0BWDN
Regards
--
Christophe GRENIER
grenier(a)cgsecurity.org
TestDisk & PhotoRec - Data Recovery
https://www.cgsecurity.org
1 year, 4 months
Add Direct LUN to VM with Rest API
by LS CHENG
Hi
Does anyone know how to add a Direct LUN to a VM or to multiple VMs?
I am trying to clone a couple of VMs' fibre channel direct LUNs with the SAN's snapshot technology and present those snapshots to other VMs. I would like to do this from the CLI, but I cannot find any example for Fibre Channel disks and how to attach them to a VM.
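For what it's worth, the REST API does take direct LUN attachments; an untested sketch (engine URL, VM id and LUN WWID are placeholders, and the element names are from memory of the v4 API):
curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -d '<disk_attachment>
        <interface>virtio_scsi</interface>
        <active>true</active>
        <disk>
          <alias>san-snapshot-lun</alias>
          <lun_storage>
            <type>fcp</type>
            <logical_units>
              <logical_unit id="36xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"/>
            </logical_units>
          </lun_storage>
        </disk>
      </disk_attachment>' \
  'https://engine.example.com/ovirt-engine/api/vms/VM_ID/diskattachments'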
Thanks!
1 year, 4 months
Configure OVN for oVirt failing - vdsm.tool.ovn_config.NetworkNotFoundError: hostname
by huw.m@twinstream.com
Hello,
When installing the self-hosted engine using rocky 9 as a host (using nightly builds), the install gets as far as running the below ansible task from ovirt-engine
- name: Configure OVN for oVirt
ansible.builtin.command: >
vdsm-tool ovn-config {{ ovn_central }} {{ ovn_tunneling_interface }} {{ ovn_host_fqdn }}
This command gets executed as vdsm-tool ovn-config 192.168.57.4 hostname.my.project.com
and fails with error
"stderr" : "Traceback (most recent call last):\n File \"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\", line 117, in get_network\n return networks[net_name]\nKeyError: 'virt-1.local.hyp.twinstream.com'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/bin/vdsm-tool\", line 195, in main\n return tool_command[cmd][\"command\"](*args)\n File \"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\", line 63, in ovn_config\n ip_address = get_ip_addr(get_network(network_caps(), net_name))\n File \"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\", line 119, in get_network\n raise NetworkNotFoundError(net_name)\nvdsm.tool.ovn_config.NetworkNotFoundError: hostname.my.project.com"
Running `vdsm-tool list-nets` on the host gives an empty list.
`ip a` gives
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:6d:16:65 brd ff:ff:ff:ff:ff:ff
altname enp0s6
altname ens6
inet 192.168.121.29/24 brd 192.168.121.255 scope global dynamic noprefixroute eth0
valid_lft 2482sec preferred_lft 2482sec
inet6 fe80::5054:ff:fe6d:1665/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:6b:f4:7b brd ff:ff:ff:ff:ff:ff
altname enp0s7
altname ens7
inet 192.168.56.151/24 brd 192.168.56.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe6b:f47b/64 scope link
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
link/ether 52:54:00:8f:40:45 brd ff:ff:ff:ff:ff:ff
altname enp0s8
altname ens8
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:2f:27:9d brd ff:ff:ff:ff:ff:ff
altname enp0s9
altname ens9
6: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bondstorage state UP group default qlen 1000
link/ether 52:54:00:b8:9b:d7 brd ff:ff:ff:ff:ff:ff
altname enp0s10
altname ens10
7: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:c2:9a:bd brd ff:ff:ff:ff:ff:ff
altname enp0s11
altname ens11
8: eth6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bondvm state UP group default qlen 1000
link/ether 52:54:00:ed:f7:cc brd ff:ff:ff:ff:ff:ff
altname enp0s12
altname ens12
9: eth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:de:8a:48 brd ff:ff:ff:ff:ff:ff
altname enp0s13
altname ens13
10: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:8f:40:45 brd ff:ff:ff:ff:ff:ff
inet 192.168.57.4/24 brd 192.168.57.255 scope global noprefixroute bond0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe8f:4045/64 scope link
valid_lft forever preferred_lft forever
11: bondvm: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:ed:f7:cc brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:feed:f7cc/64 scope link
valid_lft forever preferred_lft forever
12: bondstorage: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:b8:9b:d7 brd ff:ff:ff:ff:ff:ff
inet 192.168.59.4/24 brd 192.168.59.255 scope global noprefixroute bondstorage
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:feb8:9bd7/64 scope link
valid_lft forever preferred_lft forever
13: bondvm.20@bondvm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:ed:f7:cc brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:feed:f7cc/64 scope link
valid_lft forever preferred_lft forever
15: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:b2:5f:e2 brd ff:ff:ff:ff:ff:ff
inet 192.168.222.1/24 brd 192.168.222.255 scope global virbr0
valid_lft forever preferred_lft forever
16: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:16:3e:34:3d:ea brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe34:3dea/64 scope link
valid_lft forever preferred_lft forever
47: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 6e:27:5f:fa:e3:3a brd ff:ff:ff:ff:ff:ff
48: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 12:7c:d9:2e:cf:26 brd ff:ff:ff:ff:ff:ff
49: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether a2:35:6e:5e:4c:60 brd ff:ff:ff:ff:ff:ff
bond0 was selected as the ovirtmgmt bridge NIC. It currently only has one member interface, eth2, using balance-xor. In the oVirt management console I can see the host in a down state, and given that the rest of the playbook ran, which requires SSH connectivity between the hosted engine and the host, I believe the network is generally set up correctly.
No other immediate errors that I can see. As vdsm-tool ovn-config expects a network to exist with the value of the hostname, what is meant to be creating this on the host?
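(A hedged observation on the traceback: ovn-config resolves its second argument against the host's VDSM networks, and recent vdsm also accepts a plain IP address there; since vdsm-tool list-nets is still empty, no network name can resolve at this point. A manual sketch, assuming the bond0 address is the intended tunnel endpoint:)
vdsm-tool list-nets                              # empty until ovirtmgmt exists on the host
vdsm-tool ovn-config 192.168.57.4 192.168.57.4   # OVN central IP, then the local tunnel IP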
Thanks,
Huw
1 year, 4 months
VM Unknown Status
by ankit@eurus.net
One of the nodes suddenly became non-responsive and some VMs are stuck in Unknown status. I am trying to change the status but am unable to log in to the DB.
su - postgres
psql engine
psql command not found error.
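(Two hedged pointers, assuming a standard engine install: psql may simply not be on root's PATH when the database runs from a module or software collection, and the engine ships a helper script for clearing stuck entities:)
# locate the psql binary that ships with the engine database
find / -name psql -type f 2>/dev/null
# if present, the bundled unlock tool can clear VMs stuck in an unknown/locked state
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh --help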
Can someone help me to get rid of it?
Thanks,
Ankit Sharma
1 year, 4 months
how to renew an expired ovirt node vdsm cert manually?
by dhanaraj.ramesh@yahoo.com
Below are the steps to renew the expired vdsm cert of an oVirt node.
# To check CERT expired
# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates
1. Backup vdsm folder
# cd /etc/pki
# mv vdsm vdsm.orig
# mkdir vdsm ; chown vdsm:kvm vdsm
# cd vdsm
# mkdir libvirt-vnc certs keys libvirt-spice libvirt-migrate
# chown vdsm:kvm libvirt-vnc certs keys libvirt-spice libvirt-migrate
2. Regenerate cert & keys
# vdsm-tool configure --module certificates
3. Copy the certs and keys to their destination locations
chmod 440 /etc/pki/vdsm/keys/vdsmkey.pem
chown root /etc/pki/vdsm/certs/*pem
chmod 644 /etc/pki/vdsm/certs/*pem
cp /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/libvirt-spice/ca-cert.pem
cp /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/vdsm/libvirt-spice/server-key.pem
cp /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/vdsm/libvirt-spice/server-cert.pem
cp /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
cp /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/vdsm/libvirt-vnc/server-key.pem
cp /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/vdsm/libvirt-vnc/server-cert.pem
cp -p /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
cp -p /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/vdsm/libvirt-migrate/server-key.pem
cp -p /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/vdsm/libvirt-migrate/server-cert.pem
chown root:qemu /etc/pki/vdsm/libvirt-migrate/server-key.pem
cp -p /etc/pki/vdsm.orig/keys/libvirt_password /etc/pki/vdsm/keys/
mv /etc/pki/libvirt/clientcert.pem /etc/pki/libvirt/clientcert.pem.orig
mv /etc/pki/libvirt/private/clientkey.pem /etc/pki/libvirt/private/clientkey.pem.orig
mv /etc/pki/CA/cacert.pem /etc/pki/CA/cacert.pem.orig
cp -p /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/libvirt/clientcert.pem
cp -p /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/libvirt/private/clientkey.pem
cp -p /etc/pki/vdsm/certs/cacert.pem /etc/pki/CA/cacert.pem
4. Cross-check the backup folder /etc/pki/vdsm.orig vs /etc/pki/vdsm
# refer to /etc/pki/vdsm.orig/*/ and set the correct owner & group permissions in /etc/pki/vdsm/*/
5. Restart services # Make sure both services are up
systemctl restart vdsmd libvirtd
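A quick verification sketch after the restart (the host name is a placeholder):
# confirm the new certificate dates
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates
# confirm vdsm answers over TLS on its usual port (54321)
openssl s_client -connect myhost.example.com:54321 </dev/null | openssl x509 -noout -dates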
1 year, 4 months
Updated Ovirt Engine (4.5.5) - apache/websocket certs not renewed (self signed) - I've manually updated apache, how to do websocket?
by morgan cox
Hi.
We have an oVirt system; today I updated the engine to v4.5.5. The engine uses self-signed certs/CA.
After the update (and engine-setup) I checked the cert expiry dates:
-----
/etc/pki/ovirt-engine/ca.pem: Mar 24 15:10:29 2031 GMT
/etc/pki/ovirt-engine/certs/apache.cer: Jan 11 15:11:58 2029 GMT
/etc/pki/ovirt-engine/certs/engine.cer: May 10 11:13:51 2028 GMT
/etc/pki/ovirt-engine/qemu-ca.pem Mar 24 15:10:35 2031 GMT
/etc/pki/ovirt-engine/certs/websocket-proxy.cer Jun 11 11:13:52 2024 GMT
/etc/pki/ovirt-engine/certs/jboss.cer May 10 11:13:51 2028 GMT
/etc/pki/ovirt-engine/certs/ovirt-provider-ovn May 10 11:13:55 2028 GMT
/etc/pki/ovirt-engine/certs/ovn-ndb.cer May 10 11:13:54 2028 GMT
/etc/pki/ovirt-engine/certs/ovn-sdb.cer May 10 11:13:54 2028 GMT
/etc/pki/ovirt-engine/certs/vmconsole-proxy-helper.cer May 26 16:27:04 2027 GMT
/etc/pki/ovirt-engine/certs/vmconsole-proxy-host.cer May 26 16:27:05 2027 GMT
/etc/pki/ovirt-engine/certs/vmconsole-proxy-user.cer May 26 16:27:04 2027 GMT
---
I thought that oVirt should auto-update these when using engine-setup?
I manually updated apache cert using info from -> https://access.redhat.com/solutions/3329431 - i.e /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=apache --password="@PASSWORD@" --subject="${SUBJECT}"
How can I update the websocket cert as well?
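By analogy with the apache procedure in that article, the websocket proxy cert can presumably be re-enrolled the same way; an untested sketch (the password and subject are placeholders, the subject should match the existing cert's subject, and the key-extraction flags are from memory, so verify before use):
/usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=websocket-proxy \
    --password=mypass --subject="/C=US/O=example.com/CN=engine.example.com"
# extract the passwordless key the service expects
openssl pkcs12 -in /etc/pki/ovirt-engine/keys/websocket-proxy.p12 \
    -passin pass:mypass -nocerts -nodes \
    > /etc/pki/ovirt-engine/keys/websocket-proxy.key.nopass
systemctl restart ovirt-websocket-proxy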
Any help would be welcomed - thanks
1 year, 4 months
hosted-engine deploy skip storage configuration
by laetitia.gilet@bnf.fr
Hello,
I'm trying to install oVirt from the command line on an oVirt 4.5.5 el9 oVirt Node.
I prepared my LUN and multipath configuration and then ran
hosted-engine --deploy --4
The storage configuration is skipped and I am not prompted about which storage domain type I want to use.
In the log it shows the few questions I've answered:
QUESTION/1/CI_APPLY_OPENSCAP_PROFILE=str:no
QUESTION/1/CI_DNS=str:172.20.11.100
QUESTION/1/CI_ENABLE_FIPS=str:no
QUESTION/1/CI_INSTANCE_DOMAINNAME=str:example.fr
QUESTION/1/CI_INSTANCE_HOSTNAME=str:ovirt-prod.example.fr
QUESTION/1/CI_ROOT_PASSWORD=str:**FILTERED**
QUESTION/1/CI_ROOT_SSH_ACCESS=str:yes
QUESTION/1/CI_ROOT_SSH_PUBKEY=str:
QUESTION/1/CI_VM_ETC_HOST=str:yes
QUESTION/1/CI_VM_STATIC_NETWORKING=str:static
QUESTION/1/CLOUDINIT_VM_STATIC_IP_ADDRESS=str:172.20.82.2
QUESTION/1/DEPLOY_PROCEED=str:yes
QUESTION/1/DIALOGOVEHOSTED_NOTIF/destEmail=str:admin@example.fr
QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpPort=str:25
QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpServer=str:smtp.example.fr
QUESTION/1/DIALOGOVEHOSTED_NOTIF/sourceEmail=str:noreply-ovirt@example.fr
QUESTION/1/ENGINE_ADMIN_PASSWORD=str:**FILTERED**
QUESTION/1/OVEHOSTED_GATEWAY=str:172.20.82.1
QUESTION/1/OVEHOSTED_NETWORK_TEST=str:dns
QUESTION/1/OVEHOSTED_VMENV_OVF_ANSIBLE=str:
QUESTION/1/OVESETUP_NETWORK_FQDN_first_HE=str:kvm.example.fr
QUESTION/1/ovehosted_bridge_if=str:bond1
QUESTION/1/ovehosted_cluster_name=str:PC_Crise
QUESTION/1/ovehosted_datacenter_name=str:Ovirt-prod
QUESTION/1/ovehosted_enable_keycloak=str:no
QUESTION/1/ovehosted_vmenv_cpu=str:4
QUESTION/1/ovehosted_vmenv_mac=str:00:16:3e:71:7e:ed
QUESTION/1/ovehosted_vmenv_mem=str:16384
QUESTION/2/CI_ROOT_PASSWORD=str:**FILTERED**
QUESTION/2/ENGINE_ADMIN_PASSWORD=str:**FILTERED**
...
otopi.dialog.human dialog.__logString:204 DIALOG:SEND
2024-01-10 15:26:40,556+0100 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND --== STORAGE CONFIGURATION ==--
2024-01-10 15:26:40,556+0100 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND
2024-01-10 15:26:40,557+0100 DEBUG otopi.context context._executeMethod:124 Stage customization METHOD otopi.plugins.otopi.dialog.cli.Plugin._customize
2024-01-10 15:26:40,557+0100 DEBUG otopi.context context._executeMethod:134 otopi.plugins.otopi.dialog.cli.Plugin._customize condition False
2024-01-10 15:26:40,558+0100 DEBUG otopi.context context._executeMethod:124 Stage customization METHOD otopi.plugins.gr_he_common.core.titles.Plugin._storage_end
2024-01-10 15:26:40,559+0100 DEBUG otopi.context context._executeMethod:124 Stage customization METHOD otopi.plugins.gr_he_common.core.titles.Plugin._network_start
2024-01-10 15:26:40,559+0100 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND
2024-01-10 15:26:40,55
My host sees the LUN and the multipath -ll result is OK.
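(A hedged workaround sketch, in case it helps while the interactive storage questions are being skipped: storage answers can be pre-seeded via an answer file passed with --config-append; the key names below are from memory of otopi answer files and should be checked against your generated setup log before use:)
# /root/he-storage-answers.conf (hypothetical)
[environment:default]
OVEHOSTED_STORAGE/domainType=str:fc
OVEHOSTED_STORAGE/LunID=str:<wwid of the prepared LUN>
# then:
hosted-engine --deploy --4 --config-append=/root/he-storage-answers.conf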
Can you help me configure the engine VM storage for FC, please?
Laetitia
1 year, 4 months
Usage Of More Up-To-Date Versions Of The "kojihub" rpm Files
by Matthew J Black
Hi All,
When installing the latest version of oVirt on RHEL 9 the doco says to grab a couple of rpm files from `kojihub.stream.centos.org`. The files to grab are for v2.0.0. I'm wondering, because there are newer files on the server, if the doco might be a couple of months(?) out of date and we can instead grab the newer versions (or not, as the case may be)? Could one (or more) of the "main" oVirt devs jump in with an answer, please?
For the record, when I get 5 minutes to scratch my butt I want to spin up a test cluster and try this (and other things) out for myself, with the idea of reporting back to the Community - but I need to get a PROD cluster up and running ASAP and so don't have the luxury of "experimenting" right at this moment - hence my question.
Thanks in advance
Cheers
Dulux-Oz
1 year, 4 months
oVirt Self-Hosted Engine Deployment Error
by Matthew J Black
Hi Guys,
New oVirt install using latest versions on a Rocky Linux v9.3 host.
We're getting the following error in the setup logs:
~~~
2024-01-09 17:14:53,977+1100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["ip", "rule", "add", "from", "192.168.1.1/255.255.255.0", "priority", "101", "table", "main"], "delta": "0:00:00.002702", "end": "2024-01-09 17:14:53.680933", "msg": "non-zero return code", "rc": 2, "start": "2024-01-09 17:14:53.678231", "stderr": "RTNETLINK answers: File exists", "stderr_lines": ["RTNETLINK answers: File exists"], "stdout": "", "stdout_lines": []}
~~~
So, which "file" does "RTNETLINK answers: File exists" refer to, and can I simply manually delete that file and re-run `hosted-engine --deploy`?
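(Side note: "RTNETLINK answers: File exists" is the kernel's EEXIST for a duplicate routing rule, not a literal file. A hedged clean-up sketch using the values from the error above:)
ip rule show
# delete the stale duplicate rule from the earlier attempt, then re-run the deploy
ip rule del from 192.168.1.1/255.255.255.0 priority 101 table main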
Cheers
Dulux-Oz
1 year, 4 months
Upgrading EL9 host from 4.5.4 to 4.5.5
by Devin A. Bougie
Hi, All. When upgrading an EL9 host from 4.5.4 to 4.5.5, I've found I need to exclude the following packages to avoid the errors shown below:
*openvswitch*,*ovn*,centos-release-nfv-common
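(For reference, the same exclude can be made persistent; a sketch assuming /etc/dnf/dnf.conf, to be removed again once the repo conflict is resolved:)
[main]
exclude=*openvswitch* *ovn* centos-release-nfv-common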
Is that to be expected, or am I missing a required repo or other upgrade step? I just wanted to clarify, as the docs seem a little outdated at least WRT comments about nmstate?
https://ovirt.org/download/install_on_rhel.html
Thanks,
Devin
------
[root@lnxvirt01 ~]# rpm -qa |grep -i openvswitch
openvswitch-selinux-extra-policy-1.0-31.el9s.noarch
ovirt-openvswitch-ovn-2.17-1.el9.noarch
openvswitch2.17-2.17.0-103.el9s.x86_64
python3-openvswitch2.17-2.17.0-103.el9s.x86_64
openvswitch2.17-ipsec-2.17.0-103.el9s.x86_64
ovirt-openvswitch-ovn-host-2.17-1.el9.noarch
ovirt-openvswitch-ipsec-2.17-1.el9.noarch
ovirt-python-openvswitch-2.17-1.el9.noarch
ovirt-openvswitch-2.17-1.el9.noarch
ovirt-openvswitch-ovn-common-2.17-1.el9.noarch
centos-release-nfv-openvswitch-1-5.el9.noarch
[root@lnxvirt01 ~]# dnf update
173 files removed
CLASSE oVirt Packages - x86_64                     988 kB/s | 9.6 kB  00:00
CLASSE Packages - x86_64                            45 MB/s | 642 kB  00:00
CentOS-9-stream - Ceph Pacific                     561 kB/s | 557 kB  00:00
CentOS-9-stream - Gluster 10                       245 kB/s |  56 kB  00:00
CentOS-9 - RabbitMQ 38                             392 kB/s | 104 kB  00:00
CentOS Stream 9 - NFV OpenvSwitch                  709 kB/s | 154 kB  00:00
CentOS-9 - OpenStack yoga                           11 MB/s | 3.0 MB  00:00
CentOS Stream 9 - OpsTools - collectd              175 kB/s |  51 kB  00:00
CentOS Stream 9 - Extras packages                   57 kB/s |  15 kB  00:00
CentOS Stream 9 - oVirt 4.5                        2.7 MB/s | 1.0 MB  00:00
oVirt upstream for CentOS Stream 9 - oVirt 4.5     932  B/s | 7.5 kB  00:08
AlmaLinux 9 - AppStream                             84 MB/s | 8.1 MB  00:00
AlmaLinux 9 - BaseOS                                75 MB/s | 3.5 MB  00:00
AlmaLinux 9 - BaseOS - Debug                        12 MB/s | 2.2 MB  00:00
AlmaLinux 9 - CRB                                   67 MB/s | 2.3 MB  00:00
AlmaLinux 9 - Extras                               1.5 MB/s |  17 kB  00:00
AlmaLinux 9 - HighAvailability                      30 MB/s | 434 kB  00:00
AlmaLinux 9 - NFV                                   70 MB/s | 2.0 MB  00:00
AlmaLinux 9 - Plus                                 3.2 MB/s |  29 kB  00:00
AlmaLinux 9 - ResilientStorage                      14 MB/s | 446 kB  00:00
AlmaLinux 9 - RT                                    70 MB/s | 1.9 MB  00:00
AlmaLinux 9 - SAP                                  846 kB/s | 9.7 kB  00:00
AlmaLinux 9 - SAPHANA                              1.3 MB/s |  13 kB  00:00
Error:
 Problem 1: package ovirt-openvswitch-2.17-1.el9.noarch from @System requires openvswitch2.17, but none of the providers can be installed
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-103.el9s.x86_64 from @System
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-103.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-108.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-109.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-115.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-120.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-15.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-31.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-51.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-52.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-55.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-57.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-60.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-62.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-63.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-67.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-68.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-71.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-72.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-76.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-77.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-85.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-87.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-92.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-93.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes openvswitch2.17 < 3.1 provided by openvswitch2.17-2.17.0-95.el9s.x86_64 from centos-nfv-openvswitch
- cannot install the best update candidate for package ovirt-openvswitch-2.17-1.el9.noarch
- cannot install the best update candidate for package openvswitch2.17-2.17.0-103.el9s.x86_64
Problem 2: package python3-rdo-openvswitch-2:3.1-2.el9s.noarch from centos-openstack-yoga obsoletes python3-openvswitch2.17 < 3.1 provided by python3-openvswitch2.17-2.17.0-120.el9s.x86_64 from centos-nfv-openvswitch
- package openvswitch2.17-ipsec-2.17.0-120.el9s.x86_64 from centos-nfv-openvswitch requires python3-openvswitch2.17 = 2.17.0-120.el9s, but none of the providers can be installed
- cannot install the best update candidate for package python3-openvswitch2.17-2.17.0-103.el9s.x86_64
- cannot install the best update candidate for package openvswitch2.17-ipsec-2.17.0-103.el9s.x86_64
Problem 3: package ovirt-openvswitch-ovn-common-2.17-1.el9.noarch from @System requires ovn22.09, but none of the providers can be installed
- package rdo-ovn-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09 < 22.12 provided by ovn22.09-22.09.0-31.el9s.x86_64 from @System
- package rdo-ovn-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09 < 22.12 provided by ovn22.09-22.09.0-11.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-ovn-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09 < 22.12 provided by ovn22.09-22.09.0-22.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-ovn-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09 < 22.12 provided by ovn22.09-22.09.0-31.el9s.x86_64 from centos-nfv-openvswitch
- cannot install the best update candidate for package ovn22.09-22.09.0-31.el9s.x86_64
- cannot install the best update candidate for package ovirt-openvswitch-ovn-common-2.17-1.el9.noarch
Problem 4: package ovirt-openvswitch-ovn-host-2.17-1.el9.noarch from @System requires ovn22.09-host, but none of the providers can be installed
- package rdo-ovn-host-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09-host < 22.12 provided by ovn22.09-host-22.09.0-31.el9s.x86_64 from @System
- package rdo-ovn-host-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09-host < 22.12 provided by ovn22.09-host-22.09.0-11.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-ovn-host-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09-host < 22.12 provided by ovn22.09-host-22.09.0-22.el9s.x86_64 from centos-nfv-openvswitch
- package rdo-ovn-host-2:22.12-2.el9s.noarch from centos-openstack-yoga obsoletes ovn22.09-host < 22.12 provided by ovn22.09-host-22.09.0-31.el9s.x86_64 from centos-nfv-openvswitch
- cannot install the best update candidate for package ovn22.09-host-22.09.0-31.el9s.x86_64
- cannot install the best update candidate for package ovirt-openvswitch-ovn-host-2.17-1.el9.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
[root@lnxvirt01 ~]# yum --exclude=kernel*,*openvswitch*,*ovn*,centos-release-nfv-common update
Last metadata expiration check: 0:13:54 ago on Mon 08 Jan 2024 02:51:41 PM EST.
Dependencies resolved.
========================================================================================================================================================================================================================================
Package Architecture Version Repository Size
========================================================================================================================================================================================================================================
Upgrading:
NetworkManager-libreswan x86_64 1.2.14-2.el9_3.alma.1 appstream 124 k
centos-release-ceph-pacific noarch 1.0-2.el9s c9s-extras-common 7.4 k
centos-release-cloud noarch 1-1.el9s c9s-extras-common 7.9 k
centos-release-gluster10 noarch 1.0-2.el9s c9s-extras-common 8.8 k
centos-release-messaging noarch 1-4.el9s c9s-extras-common 8.4 k
centos-release-openstack-yoga noarch 1-4.el9s c9s-extras-common 8.0 k
centos-release-opstools noarch 1-12.el9s c9s-extras-common 8.4 k
centos-release-ovirt45 noarch 9.2-1.el9s c9s-extras-common 18 k
centos-release-rabbitmq-38 noarch 1-4.el9s c9s-extras-common 7.4 k
centos-release-storage-common noarch 2-5.el9s c9s-extras-common 8.3 k
centos-release-virt-common noarch 1-4.el9s c9s-extras-common 7.9 k
ceph-common x86_64 2:16.2.14-1.el9s centos-ceph-pacific 20 M
firefox x86_64 115.6.0-1.el9_3.alma appstream 110 M
glusterfs x86_64 10.5-1.el9s centos-gluster10 606 k
glusterfs-cli x86_64 10.5-1.el9s centos-gluster10 184 k
glusterfs-client-xlators x86_64 10.5-1.el9s centos-gluster10 854 k
glusterfs-fuse x86_64 10.5-1.el9s centos-gluster10 137 k
libcephfs2 x86_64 2:16.2.14-1.el9s centos-ceph-pacific 657 k
libgfrpc0 x86_64 10.5-1.el9s centos-gluster10 53 k
libgfxdr0 x86_64 10.5-1.el9s centos-gluster10 28 k
libglusterd0 x86_64 10.5-1.el9s centos-gluster10 11 k
libglusterfs0 x86_64 10.5-1.el9s centos-gluster10 300 k
libqb x86_64 2.0.8-1.el9 centos-ovirt45 91 k
librados2 x86_64 2:16.2.14-1.el9s centos-ceph-pacific 3.2 M
libradosstriper1 x86_64 2:16.2.14-1.el9s centos-ceph-pacific 469 k
librbd1 x86_64 2:16.2.14-1.el9s centos-ceph-pacific 3.0 M
librgw2 x86_64 2:16.2.14-1.el9s centos-ceph-pacific 3.4 M
libvirt x86_64 9.5.0-7.el9_3.alma.2 appstream 22 k
libvirt-client x86_64 9.5.0-7.el9_3.alma.2 appstream 426 k
libvirt-daemon x86_64 9.5.0-7.el9_3.alma.2 appstream 168 k
libvirt-daemon-common x86_64 9.5.0-7.el9_3.alma.2 appstream 129 k
libvirt-daemon-config-network x86_64 9.5.0-7.el9_3.alma.2 appstream 25 k
libvirt-daemon-config-nwfilter x86_64 9.5.0-7.el9_3.alma.2 appstream 30 k
libvirt-daemon-driver-interface x86_64 9.5.0-7.el9_3.alma.2 appstream 174 k
libvirt-daemon-driver-network x86_64 9.5.0-7.el9_3.alma.2 appstream 212 k
libvirt-daemon-driver-nodedev x86_64 9.5.0-7.el9_3.alma.2 appstream 194 k
libvirt-daemon-driver-nwfilter x86_64 9.5.0-7.el9_3.alma.2 appstream 210 k
libvirt-daemon-driver-qemu x86_64 9.5.0-7.el9_3.alma.2 appstream 909 k
libvirt-daemon-driver-secret x86_64 9.5.0-7.el9_3.alma.2 appstream 171 k
libvirt-daemon-driver-storage x86_64 9.5.0-7.el9_3.alma.2 appstream 22 k
libvirt-daemon-driver-storage-core x86_64 9.5.0-7.el9_3.alma.2 appstream 229 k
libvirt-daemon-driver-storage-disk x86_64 9.5.0-7.el9_3.alma.2 appstream 33 k
libvirt-daemon-driver-storage-iscsi x86_64 9.5.0-7.el9_3.alma.2 appstream 30 k
libvirt-daemon-driver-storage-logical x86_64 9.5.0-7.el9_3.alma.2 appstream 34 k
libvirt-daemon-driver-storage-mpath x86_64 9.5.0-7.el9_3.alma.2 appstream 28 k
libvirt-daemon-driver-storage-rbd x86_64 9.5.0-7.el9_3.alma.2 appstream 38 k
libvirt-daemon-driver-storage-scsi x86_64 9.5.0-7.el9_3.alma.2 appstream 30 k
libvirt-daemon-kvm x86_64 9.5.0-7.el9_3.alma.2 appstream 22 k
libvirt-daemon-lock x86_64 9.5.0-7.el9_3.alma.2 appstream 58 k
libvirt-daemon-log x86_64 9.5.0-7.el9_3.alma.2 appstream 62 k
libvirt-daemon-plugin-lockd x86_64 9.5.0-7.el9_3.alma.2 appstream 33 k
libvirt-daemon-plugin-sanlock x86_64 9.5.0-7.el9_3.alma.2 crb 44 k
libvirt-daemon-proxy x86_64 9.5.0-7.el9_3.alma.2 appstream 166 k
libvirt-libs x86_64 9.5.0-7.el9_3.alma.2 appstream 4.8 M
otopi-common noarch 1.10.4-1.el9 centos-ovirt45 92 k
ovirt-ansible-collection noarch 3.2.0-1.el9 centos-ovirt45 279 k
ovirt-engine-setup-base noarch 4.5.5-1.el9 centos-ovirt45 111 k
ovirt-hosted-engine-ha noarch 2.5.1-1.el9 centos-ovirt45 312 k
ovirt-hosted-engine-setup noarch 2.7.1-1.el9 centos-ovirt45 221 k
ovirt-vmconsole noarch 1.0.9-3.el9 centos-ovirt45 38 k
ovirt-vmconsole-host noarch 1.0.9-3.el9 centos-ovirt45 21 k
python3-ceph-argparse x86_64 2:16.2.14-1.el9s centos-ceph-pacific 46 k
python3-ceph-common x86_64 2:16.2.14-1.el9s centos-ceph-pacific 98 k
python3-cephfs x86_64 2:16.2.14-1.el9s centos-ceph-pacific 193 k
python3-os-brick noarch 5.2.4-1.el9s centos-openstack-yoga 1.1 M
python3-oslo-config noarch 2:8.8.1-1.el9s centos-openstack-yoga 216 k
python3-otopi noarch 1.10.4-1.el9 centos-ovirt45 105 k
python3-ovirt-engine-lib noarch 4.5.5-1.el9 centos-ovirt45 31 k
python3-rados x86_64 2:16.2.14-1.el9s centos-ceph-pacific 343 k
python3-rbd x86_64 2:16.2.14-1.el9s centos-ceph-pacific 314 k
python3-rgw x86_64 2:16.2.14-1.el9s centos-ceph-pacific 106 k
selinux-policy noarch 38.1.29-1.el9 el-classe-ovirt 56 k
selinux-policy-targeted noarch 38.1.29-1.el9 el-classe-ovirt 6.5 M
tigervnc x86_64 1.13.1-3.el9_3.3.alma.1 appstream 297 k
tigervnc-icons noarch 1.13.1-3.el9_3.3.alma.1 appstream 33 k
tigervnc-license noarch 1.13.1-3.el9_3.3.alma.1 appstream 13 k
vdsm x86_64 4.50.5.1-1.el9 centos-ovirt45 337 k
vdsm-api noarch 4.50.5.1-1.el9 centos-ovirt45 101 k
vdsm-client noarch 4.50.5.1-1.el9 centos-ovirt45 23 k
vdsm-common noarch 4.50.5.1-1.el9 centos-ovirt45 130 k
vdsm-http noarch 4.50.5.1-1.el9 centos-ovirt45 14 k
vdsm-jsonrpc noarch 4.50.5.1-1.el9 centos-ovirt45 30 k
vdsm-network x86_64 4.50.5.1-1.el9 centos-ovirt45 209 k
vdsm-python noarch 4.50.5.1-1.el9 centos-ovirt45 1.2 M
vdsm-yajsonrpc noarch 4.50.5.1-1.el9 centos-ovirt45 39 k
vivaldi-stable x86_64 6.5.3206.50-1 el-classe 103 M
Transaction Summary
========================================================================================================================================================================================================================================
Upgrade 86 Packages
Total download size: 267 M
Is this ok [y/N]: N
Operation aborted.
------
1 year, 4 months
Error: GPG check FAILED
by juan.gabriel1786@gmail.com
Hello oVirt Support Team,
I am experiencing a GPG key verification issue on my oVirt Node when attempting to update packages. The error persists even after the GPG keys have been imported and seems to be related to package verification.
System Details:
Operating System: oVirt Node 4.5.4
CPE OS Name: cpe:/o:centos:centos:9
Kernel: Linux 5.14.0-202.el9.x86_64
Architecture: x86-64
Hardware Vendor: Supermicro
Hardware Model: X9DRL-3F/iF
Steps to Reproduce:
Imported the GPG key with the command:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5
Ran dnf update, which led to the following error:
The GPG keys listed for the "oVirt upstream for CentOS Stream 9 - oVirt 4.5" repository are already installed but they are not correct for this package.
Check that the correct key URLs are configured for this repository. Failing package is: ovirt-node-ng-image-update-4.5.5-1.el9.noarch
Error: GPG check FAILED
The GPG key at file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oVirt-4.5 (0x24901D0C) is reported to be already installed, but it does not seem to match the packages being updated.
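(A hedged way to compare the imported key with the one that actually signed the failing package; dnf download needs dnf-plugins-core, and the package name below is taken from the error:)
# list imported keys and their short IDs
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}  %{SUMMARY}\n'
# fetch the failing package and show which key signed it
dnf download ovirt-node-ng-image-update
rpm -Kv ovirt-node-ng-image-update-4.5.5-1.el9.noarch.rpm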
Could you please advise on how to resolve this GPG key verification failure? I am following standard update procedures, and this issue is preventing me from maintaining the system's security and stability.
Thank you for your time and assistance.
Best regards,
1 year, 4 months
Unable to install oVirt on RHEL7.5
by SS00514758@techmahindra.com
Hi All,
I am unable to install oVirt on RHEL 7.5. To install it I am taking reference from the below link,
https://www.ovirt.org/documentation/install-guide/chap-Installing_oVirt.html
But it is not working for me; a couple of dependencies are not getting installed, and because of this I am not able to run ovirt-engine. Below are the dependency packages that fail to install:
Error: Package: collectd-write_http-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
Requires: collectd(x86-64) = 5.8.0-6.1.el7
Removing: collectd-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-6.1.el7
Updated By: collectd-5.8.1-1.el7.x86_64 (epel)
collectd(x86-64) = 5.8.1-1.el7
Available: collectd-5.7.2-1.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-1.el7
Available: collectd-5.7.2-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-3.el7
Available: collectd-5.8.0-2.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-2.el7
Available: collectd-5.8.0-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-3.el7
Available: collectd-5.8.0-5.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-5.el7
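(The conflict above looks like the EPEL collectd 5.8.1 build shadowing the 5.8.0-6.1 build from the opstools repo; a commonly suggested workaround, as a hedged sketch, is to exclude collectd from EPEL and retry the install:)
# add under the [epel] section of /etc/yum.repos.d/epel.repo
exclude=collectd*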
Help me to install this.
Looking forward to resolving this issue.
Regards
Sumit Sahay
1 year, 4 months
Cannot get Ovirt 4.5 to work, however I try. Virgin install: no pki ca-cert gen; restoring: no OVN connection
by julian.steiner@conesphere.com
Hi there,
over the last months I've hunkered down to update my company's antiquated oVirt 4.3. To manage this in an orderly fashion we replicated the setup.
In the update process I always arrive at the same problem. Once I managed to solve it by chance, but I cannot reproduce the solution.
The setup is oVirt Engine running on a dedicated CentOS Stream 8 virtual machine managed in virt-manager. The nodes are either oVirt Node 4.4 or 4.5. The problem exists on both.
Issue1:
Updating to 4.4 works without issue. Then, regardless of whether I update by restoring to oVirt 4.5 or by upgrading the engine through the update path, networks stop functioning and, very peculiarly, I get a very strange keymap in the VM console. It's no real keymap: it's QWERTZ, but # resolves as 3 and all kinds of strange stuff. However, this can be resolved on an individual basis by setting the VM console keymap to de (German). Connected hosts and new hosts always display "OVN connected: No".
The error log hints at some kind of SSL error. I either get dropped connections or protocol mismatches in the node log. I deactivated the oVirt 4.4 repositories on the engine and did a distro-sync, because I found an old bug report suggesting that protocol mismatches may result from unclean Python library versioning.
I re-enrolled certificates and reinstalled the host, and I still cannot get a connection:
Logs on host:
/var/log/ovn-controller.log:
2023-12-19T11:27:14.245Z|00018|memory|INFO|6604 kB peak resident set size after 15.1 seconds
2023-12-19T11:27:14.245Z|00019|memory|INFO|idl-cells:100
2023-12-19T11:29:34.483Z|00001|vlog|INFO|opened log file /var/log/ovn/ovn-controller.log
2023-12-19T11:29:34.512Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
2023-12-19T11:29:34.513Z|00003|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
2023-12-19T11:29:34.517Z|00004|main|INFO|OVN internal version is : [21.12.3-20.21.0-61.4]
2023-12-19T11:29:34.517Z|00005|main|INFO|OVS IDL reconnected, force recompute.
2023-12-19T11:29:34.573Z|00006|reconnect|INFO|ssl:127.0.0.1:6642: connecting...
2023-12-19T11:29:34.573Z|00007|main|INFO|OVNSB IDL reconnected, force recompute.
2023-12-19T11:29:34.573Z|00008|reconnect|INFO|ssl:127.0.0.1:6642: connection attempt failed (Connection refused)
2023-12-19T11:29:35.575Z|00009|reconnect|INFO|ssl:127.0.0.1:6642: connecting...
2023-12-19T11:29:35.589Z|00010|reconnect|INFO|ssl:127.0.0.1:6642: connection attempt failed (Connection refused)
2023-12-19T11:29:35.589Z|00011|reconnect|INFO|ssl:127.0.0.1:6642: waiting 2 seconds before reconnect
2023-12-19T11:29:37.592Z|00012|reconnect|INFO|ssl:127.0.0.1:6642: connecting...
2023-12-19T11:29:37.592Z|00013|reconnect|INFO|ssl:127.0.0.1:6642: connection attempt failed (Connection refused)
2023-12-19T11:29:37.592Z|00014|reconnect|INFO|ssl:127.0.0.1:6642: waiting 4 seconds before reconnect
2023-12-19T11:29:41.596Z|00015|reconnect|INFO|ssl:127.0.0.1:6642: connecting...
2023-12-19T11:29:41.596Z|00016|reconnect|INFO|ssl:127.0.0.1:6642: connection attempt failed (Connection refused)
2023-12-19T11:29:41.596Z|00017|reconnect|INFO|ssl:127.0.0.1:6642: continuing to reconnect in the background but suppressing further logging
/var/log/openvswitch/ovsdb-server.log:
2023-12-19T11:26:56.889Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2023-12-19T11:26:56.915Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.15.8
2023-12-19T11:27:06.922Z|00003|memory|INFO|20624 kB peak resident set size after 10.0 seconds
2023-12-19T11:27:06.922Z|00004|memory|INFO|cells:128 monitors:5 sessions:3
2023-12-19T11:29:30.771Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2023-12-19T11:29:30.813Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.15.8
2023-12-19T11:29:31.047Z|00003|jsonrpc|WARN|unix#0: receive error: Connection reset by peer
2023-12-19T11:29:31.047Z|00004|reconnect|WARN|unix#0: connection dropped (Connection reset by peer)
2023-12-19T11:29:32.821Z|00005|jsonrpc|WARN|unix#2: receive error: Connection reset by peer
2023-12-19T11:29:32.821Z|00006|reconnect|WARN|unix#2: connection dropped (Connection reset by peer)
2023-12-19T11:29:33.139Z|00007|jsonrpc|WARN|unix#4: receive error: Connection reset by peer
2023-12-19T11:29:33.139Z|00008|reconnect|WARN|unix#4: connection dropped (Connection reset by peer)
2023-12-19T11:29:40.864Z|00009|memory|INFO|23108 kB peak resident set size after 10.1 seconds
2023-12-19T11:29:40.864Z|00010|memory|INFO|cells:128 monitors:4 sessions:3
Logs on engine:
/var/log/ovn/ovsdb-server-nb.log:
2023-12-18T19:36:23.056Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-nb.log
2023-12-18T19:36:23.784Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.15.8
2023-12-18T19:36:24.275Z|00003|jsonrpc|WARN|unix#0: receive error: Connection reset by peer
2023-12-18T19:36:24.276Z|00004|reconnect|WARN|unix#0: connection dropped (Connection reset by peer)
2023-12-18T19:36:33.808Z|00005|memory|INFO|22528 kB peak resident set size after 10.8 seconds
2023-12-18T19:36:33.808Z|00006|memory|INFO|cells:99 monitors:2 sessions:1
/var/log/ovirt-engine/engine.log (currently unable to start vms. normally not the case in my tests but error message seems related)
2023-12-19 06:49:17,982-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [43d1e22d] EVENT_ID: PROVIDER_SYNCHRONIZATION_STARTED(223), Provider ovirt-provider-ovn synchronization started.
2023-12-19 06:49:18,122-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [43d1e22d] EVENT_ID: PROVIDER_SYNCHRONIZATION_ENDED(224), Provider ovirt-provider-ovn synchronization ended.
2023-12-19 06:49:18,122-05 ERROR [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [43d1e22d] Command 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand' failed: EngineException: (Failed with error Unsupported or unrecognized SSL message and code 5050)
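(A hedged note on Issue 1: in the host's ovn-controller log above, the controller keeps dialing ssl:127.0.0.1:6642, i.e. it believes the OVN southbound DB is on the host itself rather than on the engine. Re-pointing it is normally done with vdsm-tool; a sketch, with the engine FQDN and the host's ovirtmgmt IP as placeholders:)
vdsm-tool ovn-config engine.example.com 192.0.2.10
systemctl restart ovn-controller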
Issue2:
When installing the oVirt 4.5 engine, engine-setup always fails in the PKI phase because no new root cert is generated. I believe it ultimately says apache.ca is missing. This is also on a fresh CentOS Stream 8 machine following the official install instructions.
Please help. :)
1 year, 4 months
Disk upload: EngineException: java.lang.NullPointerException (Failed with error ENGINE and code 5001)
by goestin@intert00bz.nl
Hi All, after adding an oVirt node as a local storage machine I am unable to
upload a disk to the datastore. The button "test connection" shows:
"Connection to ovirt-imageio was successful.".
Version: 4.5.4-1.el8
OS: AlmaLinux 8.9 (Midnight Oncilla)
Below a excerpt from the engine.log showing the entire upload session.
Note 1: The machine "kvm-sandbox-qm7" is the machine for the
localstorage cluster. The machine "kvm-sandbox-gcz" is a machine from
the other "default cluster" and I was under the impression that the two
clusters would be completely separate things and should not interfere
with each other. But I am not sure about that.
--- snip ---
2024-01-03 10:05:26,898Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Lock Acquired to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-01-03 10:05:26,938Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Running command: TransferDiskImageCommand internal: false. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: StorageAction group CREATE_DISK with role type USER
2024-01-03 10:05:26,938Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Creating ImageTransfer entity for command 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7', proxyEnabled: true
2024-01-03 10:05:26,940Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Starting image transfer: ImageTransfer:{id='ec704c40-89bc-4fdf-a44a-607dd7b9b2f7', phase='Initializing', type='Upload', active='false', lastUpdated='Wed Jan 03 10:05:26 UTC 2024', message='null', vdsId='null', diskId='null', imagedTicketId='null', proxyUri='null', bytesSent='null', bytesTotal='697434112', clientInactivityTimeout='60', timeoutPolicy='legacy', imageFormat='COW', transferClientType='Transfer via browser', shallow='false'}
2024-01-03 10:05:26,940Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Creating disk image
2024-01-03 10:05:26,953Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Running command: AddDiskCommand internal: true. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: StorageAction group CREATE_DISK with role type USER
2024-01-03 10:05:26,961Z INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Running command: AddImageFromScratchCommand internal: true. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: Storage
2024-01-03 10:05:26,981Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', ignoreFailoverLimit='false', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', imageSizeInBytes='10737418240', volumeFormat='COW', newImageId='6b80fba5-c2ae-4b68-a24d-21d7f657da8f', imageType='Sparse', newImageDescription='{"DiskAlias":"aaa","DiskDescription":""}', imageInitialSizeInBytes='0', imageId='00000000-0000-0000-0000-000000000000', sourceImageGroupId='00000000-0000-0000-0000-000000000000', shouldAddBitmaps='false', legal='true', sequenceNumber='1', bitmap='null'}), log id: b4c79ef
2024-01-03 10:05:27,406Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, CreateVolumeVDSCommand, return: 6b80fba5-c2ae-4b68-a24d-21d7f657da8f, log id: b4c79ef
2024-01-03 10:05:27,409Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'a6f3ff83-daa4-4799-908a-07029ff8f6ef'
2024-01-03 10:05:27,410Z INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] CommandMultiAsyncTasks::attachTask: Attaching task '5a29a235-b61c-4efb-959a-f29ae7f863be' to command 'a6f3ff83-daa4-4799-908a-07029ff8f6ef'.
2024-01-03 10:05:27,427Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] Adding task '5a29a235-b61c-4efb-959a-f29ae7f863be' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2024-01-03 10:05:27,435Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] BaseAsyncTask::startPollingTask: Starting to poll task '5a29a235-b61c-4efb-959a-f29ae7f863be'.
2024-01-03 10:05:27,449Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] EVENT_ID: ADD_DISK_INTERNAL(2,036), Add-Disk operation of 'aaa' was initiated by the system.
2024-01-03 10:05:27,457Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [7855db64-635d-430f-9de8-21b1983e43a0] EVENT_ID: TRANSFER_IMAGE_INITIATED(1,031), Image Upload with disk aaa was initiated by [[redacted user]]@[[redacted]]@[[redacted]].
2024-01-03 10:05:27,866Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-95) [7855db64-635d-430f-9de8-21b1983e43a0] Command 'AddDisk' (id: 'dfe023fa-0a96-40e8-9934-fb94a156bff6') waiting on child command id: 'a6f3ff83-daa4-4799-908a-07029ff8f6ef' type:'AddImageFromScratch' to complete
2024-01-03 10:05:27,869Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-95) [7855db64-635d-430f-9de8-21b1983e43a0] Waiting for disk to be added for image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7'
2024-01-03 10:05:29,872Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-16) [7855db64-635d-430f-9de8-21b1983e43a0] Command 'AddDisk' (id: 'dfe023fa-0a96-40e8-9934-fb94a156bff6') waiting on child command id: 'a6f3ff83-daa4-4799-908a-07029ff8f6ef' type:'AddImageFromScratch' to complete
2024-01-03 10:05:29,875Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-16) [7855db64-635d-430f-9de8-21b1983e43a0] Waiting for disk to be added for image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7'
2024-01-03 10:05:30,014Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2024-01-03 10:05:30,019Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] SPMAsyncTask::PollTask: Polling task '5a29a235-b61c-4efb-959a-f29ae7f863be' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2024-01-03 10:05:30,019Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] BaseAsyncTask::onTaskEndSuccess: Task '5a29a235-b61c-4efb-959a-f29ae7f863be' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2024-01-03 10:05:30,021Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] CommandAsyncTask::endActionIfNecessary: All tasks of command 'a6f3ff83-daa4-4799-908a-07029ff8f6ef' has ended -> executing 'endAction'
2024-01-03 10:05:30,022Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'a6f3ff83-daa4-4799-908a-07029ff8f6ef'): calling endAction '.
2024-01-03 10:05:30,022Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'AddImageFromScratch',
2024-01-03 10:05:30,027Z INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] Command [id=a6f3ff83-daa4-4799-908a-07029ff8f6ef]: Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
2024-01-03 10:05:30,027Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' completed, handling the result.
2024-01-03 10:05:30,027Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' succeeded, clearing tasks.
2024-01-03 10:05:30,027Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '5a29a235-b61c-4efb-959a-f29ae7f863be'
2024-01-03 10:05:30,028Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', ignoreFailoverLimit='false', taskId='5a29a235-b61c-4efb-959a-f29ae7f863be'}), log id: af95363
2024-01-03 10:05:30,028Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] START, HSMClearTaskVDSCommand(HostName = kvm-sandbox-qm7, HSMTaskGuidBaseVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89', taskId='5a29a235-b61c-4efb-959a-f29ae7f863be'}), log id: 52de524b
2024-01-03 10:05:30,044Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, HSMClearTaskVDSCommand, return: , log id: 52de524b
2024-01-03 10:05:30,044Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, SPMClearTaskVDSCommand, return: , log id: af95363
2024-01-03 10:05:30,050Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] BaseAsyncTask::removeTaskFromDB: Removed task '5a29a235-b61c-4efb-959a-f29ae7f863be' from DataBase
2024-01-03 10:05:30,050Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2956) [7855db64-635d-430f-9de8-21b1983e43a0] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'a6f3ff83-daa4-4799-908a-07029ff8f6ef'
2024-01-03 10:05:31,567Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-6) [1ab78812-4bb8-4889-bea0-d2f2a82d52af] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: SystemAction group CREATE_DISK with role type USER
2024-01-03 10:05:33,877Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Getting volume info for image '13dcaf25-6b58-4c79-85a7-0aecd153fb59/6b80fba5-c2ae-4b68-a24d-21d7f657da8f'
2024-01-03 10:05:33,894Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] START, GetVolumeInfoVDSCommand(HostName = kvm-sandbox-qm7, GetVolumeInfoVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89', storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', imageId='6b80fba5-c2ae-4b68-a24d-21d7f657da8f'}), log id: 78edc340
2024-01-03 10:05:33,908Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@14184952, log id: 78edc340
2024-01-03 10:05:33,908Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Command 'AddDisk' id: 'dfe023fa-0a96-40e8-9934-fb94a156bff6' child commands '[a6f3ff83-daa4-4799-908a-07029ff8f6ef]' executions were completed, status 'SUCCEEDED'
2024-01-03 10:05:33,967Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Successfully added Upload disk 'aaa' (disk id: '13dcaf25-6b58-4c79-85a7-0aecd153fb59', image id: '6b80fba5-c2ae-4b68-a24d-21d7f657da8f') for image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7'
2024-01-03 10:05:33,978Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] START, PrepareImageVDSCommand(HostName = kvm-sandbox-gcz, PrepareImageVDSCommandParameters:{hostId='059c7eaf-da39-41f2-bb61-659ab3bd1b61'}), log id: 104764a4
2024-01-03 10:05:33,981Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Failed in 'PrepareImageVDS' method, for vds: 'kvm-sandbox-gcz'; host: 'kvm-sandbox-gcz.hprvsr.infra.pdc.[[redacted]]': null
2024-01-03 10:05:33,982Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Command 'PrepareImageVDSCommand(HostName = kvm-sandbox-gcz, PrepareImageVDSCommandParameters:{hostId='059c7eaf-da39-41f2-bb61-659ab3bd1b61'})' execution failed: null
2024-01-03 10:05:33,982Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, PrepareImageVDSCommand, return: , log id: 104764a4
2024-01-03 10:05:33,982Z ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [7855db64-635d-430f-9de8-21b1983e43a0] Failed to prepare image for image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7': {}: org.ovirt.engine.core.common.errors.EngineException: EngineException: java.lang.NullPointerException (Failed with error ENGINE and code 5001)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2121)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.prepareImage(TransferDiskImageCommand.java:188)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.startImageTransferSession(TransferDiskImageCommand.java:1064)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.handleImageIsReadyForTransfer(TransferDiskImageCommand.java:681)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.handleInitializing(TransferDiskImageCommand.java:654)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.executeStateHandler(TransferDiskImageCommand.java:587)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.proceedCommandExecution(TransferDiskImageCommand.java:574)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferImageCommandCallback.doPolling(TransferImageCommandCallback.java:21)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:360)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:511)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:227)
Caused by: java.lang.NullPointerException
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageReturn.<init>(PrepareImageReturn.java:15)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer.prepareImage(JsonRpcVdsServer.java:1947)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand.executeImageActionVdsBrokerCommand(PrepareImageVDSCommand.java:18)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand.executeImageActionVdsBrokerCommand(PrepareImageVDSCommand.java:5)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.ImageActionsVDSCommandBase.executeVdsBrokerCommand(ImageActionsVDSCommandBase.java:14)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:410)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source)
at jdk.internal.reflect.GeneratedMethodAccessor87.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:51)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:78)
at org.ovirt.engine.core.common//org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12)
at jdk.internal.reflect.GeneratedMethodAccessor80.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown Source)
... 19 more
2024-01-03 10:05:34,991Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
2024-01-03 10:05:34,997Z INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' successfully.
2024-01-03 10:05:35,010Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', ignoreFailoverLimit='false', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', imageId='6b80fba5-c2ae-4b68-a24d-21d7f657da8f'}), log id: 127cba80
2024-01-03 10:05:35,011Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] START, GetVolumeInfoVDSCommand(HostName = kvm-sandbox-qm7, GetVolumeInfoVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89', storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', imageId='6b80fba5-c2ae-4b68-a24d-21d7f657da8f'}), log id: 1f0ed2a8
2024-01-03 10:05:35,024Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@14184952, log id: 1f0ed2a8
2024-01-03 10:05:35,024Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@14184952, log id: 127cba80
2024-01-03 10:05:35,046Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] START, PrepareImageVDSCommand(HostName = kvm-sandbox-qm7, PrepareImageVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89'}), log id: 39df22a1
2024-01-03 10:05:35,076Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, PrepareImageVDSCommand, return: PrepareImageReturn:{status='Status [code=0, message=Done]'}, log id: 39df22a1
2024-01-03 10:05:35,077Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] START, GetQemuImageInfoVDSCommand(HostName = kvm-sandbox-qm7, GetVolumeInfoVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89', storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', imageId='6b80fba5-c2ae-4b68-a24d-21d7f657da8f'}), log id: 59083d66
2024-01-03 10:05:35,093Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, GetQemuImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.QemuImageInfo@12025249, log id: 59083d66
2024-01-03 10:05:35,095Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] START, TeardownImageVDSCommand(HostName = kvm-sandbox-qm7, ImageActionsVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89'}), log id: 2c3b32ab
2024-01-03 10:05:35,097Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] FINISH, TeardownImageVDSCommand, return: StatusReturn:{status='Status [code=0, message=Done]'}, log id: 2c3b32ab
2024-01-03 10:05:35,106Z WARN [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [] VM is null - no unlocking
2024-01-03 10:05:35,132Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [] EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'aaa' was successfully added.
2024-01-03 10:05:35,134Z ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand' with failure.
2024-01-03 10:05:35,134Z ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [7855db64-635d-430f-9de8-21b1983e43a0] Failed to transfer disk '00000000-0000-0000-0000-000000000000' for image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7'
2024-01-03 10:05:35,157Z INFO [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Running command: RemoveDiskCommand internal: true. Entities affected : ID: 13dcaf25-6b58-4c79-85a7-0aecd153fb59 Type: DiskAction group DELETE_DISK with role type USER
2024-01-03 10:05:35,176Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Running command: RemoveImageCommand internal: true. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: Storage
2024-01-03 10:05:35,202Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', ignoreFailoverLimit='false', storageDomainId='95dfc5bc-2a31-405c-ada0-6015edd281da', imageGroupId='13dcaf25-6b58-4c79-85a7-0aecd153fb59', postZeros='false', discard='false', forceDelete='false'}), log id: 10fab46b
2024-01-03 10:05:35,449Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] FINISH, DeleteImageGroupVDSCommand, return: , log id: 10fab46b
2024-01-03 10:05:35,451Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command '8640049f-0ead-486a-93b4-dcbbaa353294'
2024-01-03 10:05:35,451Z INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] CommandMultiAsyncTasks::attachTask: Attaching task 'b51f31d8-944b-4539-beb6-a0ab995073c6' to command '8640049f-0ead-486a-93b4-dcbbaa353294'.
2024-01-03 10:05:35,463Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Adding task 'b51f31d8-944b-4539-beb6-a0ab995073c6' (Parent Command 'RemoveImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2024-01-03 10:05:35,468Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] BaseAsyncTask::startPollingTask: Starting to poll task 'b51f31d8-944b-4539-beb6-a0ab995073c6'.
2024-01-03 10:05:35,468Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] BaseAsyncTask::startPollingTask: Starting to poll task 'b51f31d8-944b-4539-beb6-a0ab995073c6'.
2024-01-03 10:05:35,533Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] EVENT_ID: USER_FINISHED_REMOVE_DISK(2,014), Disk aaa was successfully removed from domain localstorage (User [[redacted user]]@[[redacted]]@[[redacted]]).
2024-01-03 10:05:35,534Z INFO [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Lock freed to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-01-03 10:05:35,534Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Lock freed to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-01-03 10:05:35,535Z INFO [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] Updating image transfer 'ec704c40-89bc-4fdf-a44a-607dd7b9b2f7' phase from 'Initializing' to 'Finished Failure'
2024-01-03 10:05:35,547Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [5871c85c] EVENT_ID: TRANSFER_IMAGE_FAILED(1,034), Image Upload with disk aaa failed.
2024-01-03 10:05:35,646Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-1) [d4821bf6-c786-4129-a3b0-4a79fc2d61d7] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: SystemAction group CREATE_DISK with role type USER
2024-01-03 10:05:36,562Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-62) [5871c85c] Command 'RemoveDisk' (id: 'b01c95d8-b3c3-40db-9a41-6fe1e817fe5a') waiting on child command id: '8640049f-0ead-486a-93b4-dcbbaa353294' type:'RemoveImage' to complete
2024-01-03 10:05:36,565Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-62) [5871c85c] Waiting on remove image command to complete the task 'b51f31d8-944b-4539-beb6-a0ab995073c6'
2024-01-03 10:05:38,569Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-32) [5871c85c] Command 'RemoveDisk' (id: 'b01c95d8-b3c3-40db-9a41-6fe1e817fe5a') waiting on child command id: '8640049f-0ead-486a-93b4-dcbbaa353294' type:'RemoveImage' to complete
2024-01-03 10:05:38,573Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-32) [5871c85c] Waiting on remove image command to complete the task 'b51f31d8-944b-4539-beb6-a0ab995073c6'
2024-01-03 10:05:39,645Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-1) [95e8a4ca-324f-4f9e-86a0-ef324383ca64] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: 95dfc5bc-2a31-405c-ada0-6015edd281da Type: SystemAction group CREATE_DISK with role type USER
2024-01-03 10:05:40,022Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-40) [] Polling and updating Async Tasks: 2 tasks, 1 tasks to poll now
2024-01-03 10:05:40,027Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-40) [] SPMAsyncTask::PollTask: Polling task 'b51f31d8-944b-4539-beb6-a0ab995073c6' (Parent Command 'RemoveImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2024-01-03 10:05:40,027Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-40) [] BaseAsyncTask::onTaskEndSuccess: Task 'b51f31d8-944b-4539-beb6-a0ab995073c6' (Parent Command 'RemoveImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2024-01-03 10:05:40,030Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-40) [] CommandAsyncTask::endActionIfNecessary: All tasks of command '8640049f-0ead-486a-93b4-dcbbaa353294' has ended -> executing 'endAction'
2024-01-03 10:05:40,030Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-40) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: '8640049f-0ead-486a-93b4-dcbbaa353294'): calling endAction '.
2024-01-03 10:05:40,031Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'RemoveImage',
2024-01-03 10:05:40,033Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'RemoveImage' completed, handling the result.
2024-01-03 10:05:40,034Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'RemoveImage' succeeded, clearing tasks.
2024-01-03 10:05:40,034Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] SPMAsyncTask::ClearAsyncTask: Attempting to clear task 'b51f31d8-944b-4539-beb6-a0ab995073c6'
2024-01-03 10:05:40,034Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='4c5f1e92-239b-471c-9db0-970029129a62', ignoreFailoverLimit='false', taskId='b51f31d8-944b-4539-beb6-a0ab995073c6'}), log id: 7535a00c
2024-01-03 10:05:40,035Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] START, HSMClearTaskVDSCommand(HostName = kvm-sandbox-qm7, HSMTaskGuidBaseVDSCommandParameters:{hostId='9fb846b0-58cf-41ab-875c-3e3118a24b89', taskId='b51f31d8-944b-4539-beb6-a0ab995073c6'}), log id: 3522ca07
2024-01-03 10:05:40,046Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] FINISH, HSMClearTaskVDSCommand, return: , log id: 3522ca07
2024-01-03 10:05:40,046Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] FINISH, SPMClearTaskVDSCommand, return: , log id: 7535a00c
2024-01-03 10:05:40,050Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] BaseAsyncTask::removeTaskFromDB: Removed task 'b51f31d8-944b-4539-beb6-a0ab995073c6' from DataBase
2024-01-03 10:05:40,050Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-2965) [5871c85c] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity '8640049f-0ead-486a-93b4-dcbbaa353294'
2024-01-03 10:05:42,583Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-11) [5871c85c] Command 'RemoveDisk' (id: 'b01c95d8-b3c3-40db-9a41-6fe1e817fe5a') waiting on child command id: '8640049f-0ead-486a-93b4-dcbbaa353294' type:'RemoveImage' to complete
2024-01-03 10:05:42,593Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-11) [5871c85c] Remove image command has completed successfully for disk '13dcaf25-6b58-4c79-85a7-0aecd153fb59' with async task(s) '[b51f31d8-944b-4539-beb6-a0ab995073c6]'.
2024-01-03 10:05:44,692Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-39) [5871c85c] Command 'RemoveDisk' id: 'b01c95d8-b3c3-40db-9a41-6fe1e817fe5a' child commands '[8640049f-0ead-486a-93b4-dcbbaa353294]' executions were completed, status 'SUCCEEDED'
2024-01-03 10:05:45,719Z INFO [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-26) [5871c85c] Ending command 'org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand' successfully.
--- /snip ---
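From the trace, the PrepareImage call only fails on kvm-sandbox-gcz (the same call succeeds right afterwards on kvm-sandbox-qm7), so my next step will probably be to look at vdsm on that host directly -- roughly along these lines, unless someone has a better idea:

# On kvm-sandbox-gcz: does vdsm answer API calls at all?
vdsm-client Host getCapabilities
# Anything suspicious host-side around the time of the failure (10:05:33)?
journalctl -u vdsmd --since "2024-01-03 10:00" --until "2024-01-03 10:10"
grep -i prepare /var/log/vdsm/vdsm.log | tail -n 50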
Please let me know if I can supply additional information which could be
relevant.
Kind Regards,
Justin Zandbergen.
1 year, 4 months
ovirt-engine certificate renewal
by bill.hong@neurogine.com
Hi,
I'm running oVirt version 4.5.3.2-1.el8 with a 1 + 3 node setup.
I'm currently hitting an issue where the ovirt-engine portal certificate has already expired:
"PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed "
I'm aware of the suggested fix of running "engine-setup --offline" to renew the cert (https://yaohuablog.com/zh/ovirt-engine-upgrade-web-certificate).
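For reference, the expiry of the individual certificates can be checked with openssl first; the paths below are the default PKI locations on a standalone engine, so adjust them if yours differ:

# Expiry dates of the internal CA, engine and apache (web portal) certificates
openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -enddate
openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer -noout -enddate
openssl x509 -in /etc/pki/ovirt-engine/certs/apache.cer -noout -enddate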
However, my host machine has a problem with the psql command whenever I run engine-backup:
[root@server1~]# engine-backup --mode=backup
Start of engine-backup with mode 'backup'
scope: all
archive file: /var/lib/ovirt-engine-backup/ovirt-engine-backup-20240103163047.backup
log file: /var/log/ovirt-engine-backup/ovirt-engine-backup-20240103163047.log
psql: /lib64/libpq.so.5: no version information available (required by psql)
psql: /lib64/libpq.so.5: no version information available (required by psql)
psql: /lib64/libpq.so.5: no version information available (required by psql)
Backing up:
psql: /lib64/libpq.so.5: no version information available (required by psql)
psql: /lib64/libpq.so.5: no version information available (required by psql)
psql: /lib64/libpq.so.5: no version information available (required by psql)
Notifying engine
- Files
- Engine database 'engine'
Notifying engine
FATAL: Database engine backup failed
[root@server1~]# dnf module list postgresql
Last metadata expiration check: 1:55:29 ago on Wed 03 Jan 2024 02:38:30 PM +08.
CentOS Stream 8 - AppStream
Name Stream Profiles Summary
postgresql 9.6 client, server [d] PostgreSQL server and client module
postgresql 10 [d] client, server [d] PostgreSQL server and client module
postgresql 12 [e] client, server [d] PostgreSQL server and client module
postgresql 13 client, server [d] PostgreSQL server and client module
postgresql 15 client, server PostgreSQL server and client module
postgresql 16 client, server [d] PostgreSQL server and client module
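As far as I understand, the "no version information available" warning usually means the /lib64/libpq.so.5 being loaded does not come from the same place as psql itself (for example a stray libpq from another repository sitting next to the postgresql:12 module client). Something like the following should show which packages own them; the distro-sync at the end is only one possible way to pull them back in line:

# Which library does psql load, and which packages own psql and libpq.so.5?
ldd $(command -v psql) | grep libpq
rpm -qf $(command -v psql)
rpm -qf /lib64/libpq.so.5

# If the owners disagree, syncing back to the repo/module versions is an option
dnf distro-sync postgresql libpq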
Questions:
1. Should I fix the psql error first? If I just want to renew my certificate, will the psql error cause the "engine-setup --offline" run to fail to renew the cert?
2. If "engine-setup --offline" fails to renew the certificate, will my running VMs be affected and go down? Is there any recovery method afterwards? Would reinstalling work on a stand-alone machine?
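On question 2, as far as I know a stopped or broken engine does not take down VMs that are already running on the hosts; they just cannot be managed until the engine is back. Since engine-backup is currently failing, I'm also planning a plain file-level fallback copy before touching anything -- standard engine paths and the default local database name assumed:

# File-level copy of the engine configuration and PKI trees
tar czf /root/engine-conf-pki-$(date +%F).tar.gz /etc/ovirt-engine /etc/pki/ovirt-engine

# Raw dump of the engine database while engine-backup is unusable
# (assumes the default local PostgreSQL instance and database name 'engine')
su - postgres -c "pg_dump -Fc -f /var/lib/pgsql/engine-$(date +%F).dump engine"

Does that sound reasonable, or am I missing something?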
1 year, 4 months