Failed to execute stage 'Closing up': Command '/usr/share/ovirt-engine-keycloak/bin/kk_cli.sh' failed to execute
by yp414@163.com
engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: /etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, /etc/ovirt-engine-setup.conf.d/10-packaging.conf
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20221204031123-60y1er.log
Version: otopi-1.10.3 (otopi-1.10.3-1.el8)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup (late)
[ INFO ] Stage: Environment customization
--== PRODUCT OPTIONS ==--
Configure Cinderlib integration (Currently in tech preview) (Yes, No) [No]:
Configure Engine on this host (Yes, No) [Yes]:
Configuring ovirt-provider-ovn also sets the Default cluster's default network provider to ovirt-provider-ovn.
Non-Default clusters may be configured with an OVN after installation.
Configure ovirt-provider-ovn (Yes, No) [Yes]:
Configure WebSocket Proxy on this host (Yes, No) [Yes]:
* Please note * : Data Warehouse is required for the engine.
If you choose to not configure it on this host, you have to configure
it on a remote host, and then configure the engine on this host so
that it can access the database of the remote Data Warehouse host.
Configure Data Warehouse on this host (Yes, No) [Yes]:
* Please note * : Keycloak is now deprecating AAA/JDBC authentication module.
It is highly recommended to install Keycloak based authentication.
Configure Keycloak on this host (Yes, No) [Yes]:
Configure VM Console Proxy on this host (Yes, No) [Yes]:
Configure Grafana on this host (Yes, No) [Yes]:
--== PACKAGES ==--
[ INFO ] Checking for product updates...
[ INFO ] No product updates found
--== NETWORK CONFIGURATION ==--
Host fully qualified DNS name of this server [pm.local]:
[WARNING] Failed to resolve pm.local using DNS, it can be resolved only locally
Setup can automatically configure the firewall on this system.
Note: automatic configuration of the firewall may overwrite current settings.
Do you want Setup to configure the firewall? (Yes, No) [Yes]:
[ INFO ] firewalld will be configured as firewall manager.
--== DATABASE CONFIGURATION ==--
Where is the DWH database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the DWH to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql and create DWH database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
Where is the Keycloak database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the Keycloak to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql and create Keycloak database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
Where is the Engine database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql and create Engine database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
--== OVIRT ENGINE CONFIGURATION ==--
Engine admin password:
Confirm engine admin password:
[WARNING] Password is weak: The password is shorter than 8 characters
Use weak password? (Yes, No) [No]:yes
Application mode (Virt, Gluster, Both) [Both]:
Use Engine admin password as initial keycloak admin password (Yes, No) [Yes]:
--== STORAGE CONFIGURATION ==--
Default SAN wipe after delete (Yes, No) [No]:
--== PKI CONFIGURATION ==--
Organization name for certificate [local]:
--== APACHE CONFIGURATION ==--
Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
--== SYSTEM CONFIGURATION ==--
--== MISC CONFIGURATION ==--
Please choose Data Warehouse sampling scale:
(1) Basic
(2) Full
(1, 2)[1]:
Use Engine admin password as initial Grafana admin password (Yes, No) [Yes]:
--== END OF CONFIGURATION ==--
[ INFO ] Stage: Setup validation
[WARNING] Less than 16384MB of memory is available
--== CONFIGURATION PREVIEW ==--
Application mode : both
Default SAN wipe after delete : False
Host FQDN : pm.local
Firewall manager : firewalld
Update Firewall : True
Set up Cinderlib integration : False
Configure local Engine database : True
Set application as default page : True
Configure Apache SSL : True
Keycloak installation : True
Engine database host : localhost
Engine database port : 5432
Engine database secured connection : False
Engine database host name validation : False
Engine database name : engine
Engine database user name : engine
Engine installation : True
PKI organization : local
Set up ovirt-provider-ovn : True
DWH installation : True
DWH database host : localhost
DWH database port : 5432
DWH database secured connection : False
DWH database host name validation : False
DWH database name : ovirt_engine_history
Configure local DWH database : True
Grafana integration : True
Grafana database user name : ovirt_engine_history_grafana
Keycloak database host : localhost
Keycloak database port : 5432
Keycloak database secured connection : False
Keycloak database host name validation : False
Keycloak database name : ovirt_engine_keycloak
Keycloak database user name : ovirt_engine_keycloak
Configure local Keycloak database : True
Configure VMConsole Proxy : True
Configure WebSocket Proxy : True
Please confirm installation settings (OK, Cancel) [OK]:
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping ovirt-fence-kdump-listener service
[ INFO ] Stopping dwh service
[ INFO ] Stopping vmconsole-proxy service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration (early)
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Upgrading CA
[ INFO ] Creating PostgreSQL 'engine' database
[ INFO ] Configuring PostgreSQL
[ INFO ] Creating PostgreSQL 'ovirt_engine_history' database
[ INFO ] Configuring PostgreSQL
[ INFO ] Creating PostgreSQL 'ovirt_engine_keycloak' database
[ INFO ] Configuring PostgreSQL
[ INFO ] Creating CA: /etc/pki/ovirt-engine/ca.pem
[ INFO ] Creating CA: /etc/pki/ovirt-engine/qemu-ca.pem
[ INFO ] Creating a user for Grafana
[ INFO ] Setting up ovirt-vmconsole proxy helper PKI artifacts
[ INFO ] Setting up ovirt-vmconsole SSH PKI artifacts
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Creating/refreshing Engine database schema
[ INFO ] Creating/refreshing DWH database schema
[ INFO ] Updating OVN SSL configuration
[ INFO ] Updating OVN timeout configuration
[ INFO ] Creating/refreshing Engine 'internal' domain database schema
[ INFO ] Creating default mac pool range
[ INFO ] Adding default OVN provider to database
[ INFO ] Adding OVN provider secret to database
[ INFO ] Setting a password for internal user admin
[ INFO ] Creating initial Keycloak admin user
[ INFO ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
--== SUMMARY ==--
[ INFO ] No need to restart fapolicyd because it is not running.
[ INFO ] Starting dwh service
[ INFO ] Starting Grafana service
[ INFO ] Restarting ovirt-vmconsole proxy service
To login to oVirt using Keycloak SSO, enter 'admin@ovirt' as username and the password provided during Setup
To login to Keycloak Administration Console enter 'admin' as username and the password provided during Setup
Web access for Keycloak Administration Console is enabled at:
https://pm.local/ovirt-engine-auth/admin
Web access is enabled at:
http://pm.local:80/ovirt-engine
https://pm.local:443/ovirt-engine
Internal CA fingerprint: SHA256: F6:6C:CF:41:58:64:D1:84:25:10:A6:6B:4D:96:8B:EB:F5:F2:DA:FB:BD:CF:B4:2C:02:62:0B:0A:B3:15:14:33
SSH fingerprint: SHA256:Xnov0hwwe6/DN5udn3MypHx9EU5CelG6eYMHlaUZJFQ
[ INFO ] Starting engine service
[WARNING] Less than 16384MB of memory is available
Web access for grafana is enabled at:
https://pm.local/ovirt-engine-grafana/
Please run the following command on the engine machine pm.local, for SSO to work:
systemctl restart ovirt-engine
--== END OF SUMMARY ==--
[ INFO ] Restarting httpd
[ INFO ] Start with setting up Keycloak for Ovirt Engine
[ ERROR ] Failed to execute stage 'Closing up': Command '/usr/share/ovirt-engine-keycloak/bin/kk_cli.sh' failed to execute
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20221204031123-60y1er.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20221204031509-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
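For anyone hitting the same failure: the console shows only the one-line error, while the full kk_cli.sh output is captured in the otopi setup log named at the start of the run. A minimal sketch of where to look (the grep is echoed as a dry run so it is safe to paste anywhere; drop the echo to actually search the log on the engine host):

```shell
# Log path taken from the setup output above; the file name changes per run.
SETUP_LOG=/var/log/ovirt-engine/setup/ovirt-engine-setup-20221204031123-60y1er.log

# Dry run: the command is printed, not executed. Remove the echo to
# search the log for the context around the kk_cli.sh invocation.
echo grep -n -B 5 -A 20 kk_cli "$SETUP_LOG"
```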
2 years
After failed upgrade from 4.5.1 to 4.5.3, upgrades do not show up anymore
by Gianluca Amato
Hi all,
I recently tried to upgrade an oVirt node from 4.5.1 to 4.5.3. The upgrade failed (I have no idea why; how can I access the installation logs?). The node is still working fine running the old 4.5.1 release, but the oVirt web console now says that it is up to date and will not let me retry the upgrade. However, I am still on version 4.5.1, as shown by the output of "nodectl info":
----
bootloader:
default: ovirt-node-ng-4.5.1-0.20220623.0 (4.18.0-394.el8.x86_64)
entries:
ovirt-node-ng-4.5.1-0.20220623.0 (4.18.0-394.el8.x86_64):
index: 0
kernel: /boot//ovirt-node-ng-4.5.1-0.20220623.0+1/vmlinuz-4.18.0-394.el8.x86_64
args: crashkernel=auto resume=/dev/mapper/onn_ovirt--clai1-swap rd.lvm.lv=onn_ovirt-clai1/ovirt-node-ng-4.5.1-0.20220623.0+1 rd.lvm.lv=onn_ovirt-clai1/swap rhgb quiet boot=UUID=9d44cf2a-38bb-477d-b542-4bfc30463d1f rootflags=discard img.bootid=ovirt-node-ng-4.5.1-0.20220623.0+1 intel_iommu=on modprobe.blacklist=nouveau transparent_hugepage=never hugepagesz=1G hugepages=256 default_hugepagesz=1G
root: /dev/onn_ovirt-clai1/ovirt-node-ng-4.5.1-0.20220623.0+1
initrd: /boot//ovirt-node-ng-4.5.1-0.20220623.0+1/initramfs-4.18.0-394.el8.x86_64.img
title: ovirt-node-ng-4.5.1-0.20220623.0 (4.18.0-394.el8.x86_64)
blsid: ovirt-node-ng-4.5.1-0.20220623.0+1-4.18.0-394.el8.x86_64
layers:
ovirt-node-ng-4.5.1-0.20220623.0:
ovirt-node-ng-4.5.1-0.20220623.0+1
current_layer: ovirt-node-ng-4.5.1-0.20220623.0+1
------
Comparing the situation with other hosts in the same data center, it seems that the problem is that the package ovirt-node-ng-image-update-placeholder is no longer installed, hence "dnf upgrade" finds nothing to do. My idea is to manually download and install the 4.5.1 version of ovirt-node-ng-image-update-placeholder and attempt the upgrade again.
Is this a correct way to proceed?
Thanks for any help.
--gianluca amato
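On the logs question, a hedged sketch of where to look after a failed ovirt-node image upgrade, assuming the default oVirt Node layout (the log paths are the usual defaults, not confirmed for this host; the commands are only printed here as a dry run, to be copied and run on the node):

```shell
# Dry run: each command is printed rather than executed.
checks="rpm -q ovirt-node-ng-image-update ovirt-node-ng-image-update-placeholder
less /var/log/imgbased.log
less /var/log/dnf.log
nodectl check"
# imgbased performs the layered image upgrade, so its log usually has
# the actual failure; dnf.log gives the package-level view, and
# "nodectl check" sanity-checks the node layers and LVM layout.
printf '%s\n' "$checks"
```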
2 years
el9 official use?
by Nathanaël Blanchet
Hello,
Until 4.5.4, the el9 ovirt-node was for testing only. According to the 4.5.4 release notes it now seems to be officially supported, but there is no information about el9 engine support.
What about it?
2 years
Max network performance on w2019 guest
by Gianluca Cecchi
One customer sees 2.5 Gb/s on 10 Gb/s adapters for a w2019 VM with virtio, using iperf3.
Oracle Linux 8 VMs on the same infrastructure reach 9 Gb/s.
What is the expected maximum on Windows with virtio, based on experience?
Thanks
Gianluca
2 years
State sync
by KSNull Zero
Hello!
Is there any way to "sync" the current host/datastore/VM state after an Engine restore from backup?
For example, say we have a backup made 3 hours before the current time, and in those 3 hours we made some changes to VMs (config changes, power state, snapshots) or to hosts (entering maintenance mode, network changes and so on).
If we restore that backup, we will not see those changes, because all of this info is stored in the engine database.
So the question: is there any way to get "synced" with the actual current infrastructure/VM state?
Thank you.
2 years
oVirt Update Errors
by Matthew J Black
Hi Guys,
I'm attempting a Cluster update via the oVirt GUI and getting the following errors (taken from the logs), which I've confirmed with a straight `dnf update`:
Problem 1: package ovirt-hosted-engine-setup-2.6.6-1.el8.noarch conflicts with ansible-core >= 2.13 provided by ansible-core-2.13.3-1.el8.x86_64
- cannot install the best update candidate for package ovirt-hosted-engine-setup-2.6.6-1.el8.noarch
- cannot install the best update candidate for package ansible-core-2.12.7-1.el8.x86_64
Problem 2: problem with installed package ovirt-hosted-engine-setup-2.6.6-1.el8.noarch
- package ovirt-hosted-engine-setup-2.6.6-1.el8.noarch conflicts with ansible-core >= 2.13 provided by ansible-core-2.13.3-1.el8.x86_64
- package ovirt-ansible-collection-3.0.0-1.el8.noarch requires ansible-core >= 2.13.0, but none of the providers can be installed
- cannot install the best update candidate for package ovirt-ansible-collection-2.3.0-1.el8.noarch
Is it OK to do a `dnf update --nobest` or a `dnf update --allowerasing` on each host, or is there some other solution that I'm missing?
Cheers
Dulux-Oz
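A hedged sketch of how the conflict could be narrowed down before reaching for --nobest or --allowerasing (package names are taken from the error output above; the dnf commands are only printed here as a dry run, to be copied onto a host):

```shell
# Dry run: commands are printed, not executed.
cmds="dnf repoquery --installed --whatrequires ansible-core
dnf update ovirt-ansible-collection ansible-core ovirt-hosted-engine-setup
dnf update --nobest"
# Line 1 shows which installed packages pin ansible-core.
# Line 2 asks the solver to move the conflicting trio in one transaction.
# Line 3 is the last resort and accepts a non-best candidate set.
printf '%s\n' "$cmds"
```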
2 years
Nvidia A10 vGPU support on oVirt 4.5.2
by Don Dupuis
Hello
I can run an Nvidia GRID T4 on oVirt 4.5.2 with no issue, but I have a new
GRID A10 that doesn't seem to work on oVirt 4.5.2. This new card seems to
use SR-IOV instead of plain mediated devices; I only get the
mdev_supported_types directory structure after I run the
/usr/lib/nvidia/sriov-manager command. Has anyone gotten this card working on
oVirt, or do the developers working on oVirt/RHV know about this?
Thanks
Don
2 years
oVirt/Ceph iSCSI Issues
by Matthew J Black
Hi All,
I've got some issues with connecting my oVirt Cluster to my Ceph Cluster via iSCSI. There are two issues, and I don't know if one is causing the other or if they are two separate, unrelated issues. Let me explain.
The Situation
-------------
- I have a working three node Ceph Cluster (Ceph Quincy on Rocky Linux 8.6)
- The Ceph Cluster has four Storage Pools of between 4 and 8 TB each
- The Ceph Cluster has three iSCSI Gateways
- There is a single iSCSI Target on the Ceph Cluster
- The iSCSI Target has all three iSCSI Gateways attached
- The iSCSI Target has all four Storage Pools attached
- The four Storage Pools have been assigned LUNs 0-3
- I have set up (Discovery) CHAP Authorisation on the iSCSI Target
- I have a working three node self-hosted oVirt Cluster (oVirt v4.5.3 on Rocky Linux 8.6)
- The oVirt Cluster has (in addition to the hosted_storage Storage Domain) three GlusterFS Storage Domains
- I can ping all three Ceph Cluster Nodes to/from all three oVirt Hosts
- The iSCSI Target on the Ceph Cluster has all three oVirt Hosts Initiators attached
- Each Initiator has all four Ceph Storage Pools attached
- I have set up CHAP Authorisation on the iSCSI Target's Initiators
- The Ceph Cluster Admin Portal reports that all three Initiators are "logged_in"
- I have previously connected Ceph iSCSI LUNs to the oVirt Cluster successfully (as an experiment), but had to remove and re-instate them for the "final" version(?).
- The oVirt Admin Portal (ie HostedEngine) reports that Initiators are 1 & 2 (ie oVirt Hosts 1 & 2) are "logged_in" to all three iSCSI Gateways
- The oVirt Admin Portal reports that Initiator 3 (ie oVirt Host 3) is "logged_in" to iSCSI Gateways 1 & 2
- I can "force" Initiator 3 to become "logged_in" to iSCSI Gateway 3, but when I do this it is *not* persistent
- oVirt Hosts 1 & 2 can/have discovered all three iSCSI Gateways
- oVirt Hosts 1 & 2 can/have discovered all four LUNs/Targets on all three iSCSI Gateways
- oVirt Host 3 can only discover 2 of the iSCSI Gateways
- For Target/LUN 0 oVirt Host 3 can only "see" the LUN provided by iSCSI Gateway 1
- For Targets/LUNs 1-3 oVirt Host 3 can only "see" the LUNs provided by iSCSI Gateways 1 & 2
- oVirt Host 3 can *not* "see" any of the Targets/LUNs provided by iSCSI Gateway 3
- When I create a new oVirt Storage Domain for any of the four LUNs:
- I am presented with a message saying "The following LUNs are already in use..."
- I am asked to "Approve operation" via a checkbox, which I do
- As I watch the oVirt Admin Portal I can see the new iSCSI Storage Domain appear in the Storage Domain list, and then after a few minutes it is removed
- After those few minutes I am presented with this failure message: "Error while executing action New SAN Storage Domain: Network error during communication with the Host."
- I have looked in the engine.log and all I could find that was relevant (as far as I know) was this:
~~~
2022-11-28 19:59:20,506+11 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-1) [77b0c12d] Command 'CreateStorageDomainVDSCommand(HostName = ovirt_node_1.mynet.local, CreateStorageDomainVDSCommandParameters:{hostId='967301de-be9f-472a-8e66-03c24f01fa71', storageDomain='StorageDomainStatic:{name='data', id='2a14e4bd-c273-40a0-9791-6d683d145558'}', args='s0OGKR-80PH-KVPX-Fi1q-M3e4-Jsh7-gv337P'})' execution failed: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues
2022-11-28 19:59:20,507+11 ERROR [org.ovirt.engine.core.bll.storage.domain.AddSANStorageDomainCommand] (default task-1) [77b0c12d] Command 'org.ovirt.engine.core.bll.storage.domain.AddSANStorageDomainCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues (Failed with error VDS_NETWORK_ERROR and code 5022)
~~~
I cannot see/detect any "communication issue" - but then again I'm not 100% sure what I should be looking for
I have looked on-line for an answer, and apart from not being able to get past Red Hat's "wall" to see the solutions that they have, all I could find that was relevant was this: https://lists.ovirt.org/archives/list/devel@ovirt.org/thread/AVLORQNOLJHR... . If this *is* relevant then there is not enough context here for me to proceed (ie/eg *where* (which host/vm) should that command be run?).
I also found (for a previous version of oVirt) notes about manually modifying the Postgres DB to resolve a similar issue. While I am more than comfortable doing this (I've been an SQL DBA for well over 20 years), it seems like asking for trouble - at least until I hear back from the oVirt Devs that this is OK to do - and of course, I'll need the relevant commands / locations / authorisations to get into the DB.
Questions
---------
- Are the two issues (oVirt Host 3 not having a full picture of the Ceph iSCSI environment and the oVirt iSCSI Storage Domain creation failure) related?
- Do I need to "refresh" the iSCSI info on the oVirt Hosts, and if so, how do I do this?
- Do I need to "flush" the old LUNs from the oVirt Cluster, and if so, how do I do this?
- Where else should I be looking for info in the logs (& which logs)?
- Does *anyone* have any other ideas how to resolve the situation - especially when using the Ceph iSCSI Gateways?
Thanks in advance
Cheers
Dulux-Oz
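On the non-persistent login from Host 3 to Gateway 3, a hedged sketch of the usual iscsiadm sequence (the gateway address and target IQN below are placeholders, not values from this cluster; the commands are only printed here as a dry run, to be run on oVirt Host 3):

```shell
GW3="192.0.2.30"                                # placeholder: iSCSI Gateway 3 IP
IQN="iqn.2003-01.com.redhat.iscsi-gw:ceph-igw"  # placeholder: target IQN

# Line 1 re-discovers targets through gateway 3 specifically, line 2
# logs in through it, and line 3 makes that login persistent across
# reboots by setting node.startup to automatic.
cmds="iscsiadm -m discovery -t sendtargets -p ${GW3}:3260
iscsiadm -m node -T ${IQN} -p ${GW3}:3260 --login
iscsiadm -m node -T ${IQN} -p ${GW3}:3260 --op update -n node.startup -v automatic"
printf '%s\n' "$cmds"   # dry run: printed, not executed
```

If Host 3 still cannot discover Gateway 3 at all, that points back at a path or portal problem rather than login persistence.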
2 years
Import VM via KVM. Can't see vm's.
by piotret@wp.pl
Hi
I am trying to migrate VMs from oVirt 4.3.10.4-1.el7 to oVirt 4.5.3.2-1.el8.
I am using the KVM (via libvirt) provider.
My problem is that I can't see VMs from the old oVirt when they are shut down.
When they are running I can see them, but I can't import them because "All chosen VMs are running in the external system and therefore have been filtered. Please see log for details."
Thank you for help.
Regards
2 years