Backup bug in latest version
by Giulio Casella
Hi,
yesterday we upgraded engine:
ovirt-engine-4.5.7-0.master.20240506114300.git0a1ba8203f.el8.noarch ->
ovirt-engine-4.5.7-0.master.20240527152413.gited023e5e0a.el8.noarch
Since then our backups (made by Storware backup and recovery, vprotect)
are failing.
It seems that engine tries to add backup twice, and db primary key
constraint obviously fails. Find below a snippet from engine.log.
The snapshot part of backup process is executed correctly (but snapshot
named "Auto-generated for Backup VM" remains after failure).
Also script
/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py fails to
finalize/terminate backup.
Has anyone seen this and filed a bug on github?
------------------------------------------------------
2024-06-13 10:36:47,481+02 INFO
[org.ovirt.engine.core.bll.storage.backup.HybridBackupCommand] (default
task-8) [38c00ae1-a837-4758-85a9-ee9943395753] Lock Acquired to object
'EngineLock:{exclusiveLocks='[09db7fdc-c4d3-4fbd-a50e-1b04de2b1341=DISK,
32f9b481-8431-4cbc-ad3e-bb226374e693=DISK]', sharedLocks=''}'
2024-06-13 10:36:47,535+02 INFO
[org.ovirt.engine.core.bll.storage.backup.HybridBackupCommand] (default
task-8) [38c00ae1-a837-4758-85a9-ee9943395753] Running command:
HybridBackupCommand internal: false. Entities affected : ID:
dc6d9b23-e2f7-4744-a633-d4e532ef190f Type: VMAction group BACKUP_DISK
with role type ADMIN, ID: 09db7fdc-c4d3-4fbd-a50e-1b04de2b1341 Type:
DiskAction group BACKUP_DISK with role type ADMIN, ID:
32f9b481-8431-4cbc-ad3e-bb226374e693 Type: DiskAction group BACKUP_DISK
with role type ADMIN
2024-06-13 10:36:47,541+02 INFO
[org.ovirt.engine.core.bll.storage.backup.HybridBackupCommand] (default
task-8) [38c00ae1-a837-4758-85a9-ee9943395753] Created VmBackup entity
'd5bed8c5-f172-47c7-9827-5dda51490ee8' for VM
'dc6d9b23-e2f7-4744-a633-d4e532ef190f'
2024-06-13 10:36:47,596+02 INFO
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand]
(default task-8) [38c00ae1-a837-4758-85a9-ee9943395753] Lock Acquired to
object
'EngineLock:{exclusiveLocks='[dc6d9b23-e2f7-4744-a633-d4e532ef190f=VM]',
sharedLocks=''}'
2024-06-13 10:36:47,640+02 INFO
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand]
(default task-8) [38c00ae1-a837-4758-85a9-ee9943395753] Running command:
CreateSnapshotForVmCommand internal: true. Entities affected : ID:
dc6d9b23-e2f7-4744-a633-d4e532ef190f Type: VMAction group
MANIPULATE_VM_SNAPSHOTS with role type USER
2024-06-13 10:36:47,668+02 INFO
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotDiskCommand] (default
task-8) [38c00ae1-a837-4758-85a9-ee9943395753] Running command:
CreateSnapshotDiskCommand internal: true. Entities affected : ID:
dc6d9b23-e2f7-4744-a633-d4e532ef190f Type: VMAction group
MANIPULATE_VM_SNAPSHOTS with role type USER
2024-06-13 10:36:47,731+02 INFO
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (default
task-8) [38c00ae1-a837-4758-85a9-ee9943395753] Running command:
CreateSnapshotCommand internal: true. Entities affected : ID:
00000000-0000-0000-0000-000000000000 Type: Storage
2024-06-13 10:36:47,767+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand]
(default task-8) [38c00ae1-a837-4758-85a9-ee9943395753] START,
CreateVolumeVDSCommand(
CreateVolumeVDSCommandParameters:{storagePoolId='42135536-980b-4ea9-ab66-b850ed8c1f2b',
ignoreFailoverLimit='false',
storageDomainId='459011cf-ebb6-46ff-831d-8ccfafd82c8a',
imageGroupId='32f9b481-8431-4cbc-ad3e-bb226374e693',
imageSizeInBytes='107374182400', volumeFormat='COW',
newImageId='754b091f-eaa9-4fd8-afbf-1caa0e747eb6', imageType='Sparse',
newImageDescription='', imageInitialSizeInBytes='0',
imageId='06289dbc-fc0a-4d7b-bd13-92a2df199971',
sourceImageGroupId='32f9b481-8431-4cbc-ad3e-bb226374e693',
shouldAddBitmaps='false', legal='true', sequenceNumber='1',
bitmap='bf62fab5-fb82-4c03-ac60-ddb9943c9f0d'}), log id: 128fa930
2024-06-13 10:36:47,883+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand]
(default task-8) [38c00ae1-a837-4758-85a9-ee9943395753] FINISH,
CreateVolumeVDSCommand, return: 754b091f-eaa9-4fd8-afbf-1caa0e747eb6,
log id: 128fa930
2024-06-13 10:36:47,895+02 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-8)
[38c00ae1-a837-4758-85a9-ee9943395753] CommandAsyncTask::Adding
CommandMultiAsyncTasks object for command
'43b174d8-3b84-46a3-abe6-e825e9ce7c81'
2024-06-13 10:36:47,896+02 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-8)
[38c00ae1-a837-4758-85a9-ee9943395753]
CommandMultiAsyncTasks::attachTask: Attaching task
'86720608-b5bd-4a28-b7ee-b1a3507592aa' to command
'43b174d8-3b84-46a3-abe6-e825e9ce7c81'.
2024-06-13 10:36:47,918+02 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-8)
[38c00ae1-a837-4758-85a9-ee9943395753] Adding task
'86720608-b5bd-4a28-b7ee-b1a3507592aa' (Parent Command 'CreateSnapshot',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling
hasn't started yet..
2024-06-13 10:36:47,947+02 INFO
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (default
task-8) [38c00ae1-a837-4758-85a9-ee9943395753] Running command:
CreateSnapshotCommand internal: true. Entities affected : ID:
00000000-0000-0000-0000-000000000000 Type: Storage
2024-06-13 10:36:47,963+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand]
(default task-8) [38c00ae1-a837-4758-85a9-ee9943395753] START,
CreateVolumeVDSCommand(
CreateVolumeVDSCommandParameters:{storagePoolId='42135536-980b-4ea9-ab66-b850ed8c1f2b',
ignoreFailoverLimit='false',
storageDomainId='459011cf-ebb6-46ff-831d-8ccfafd82c8a',
imageGroupId='09db7fdc-c4d3-4fbd-a50e-1b04de2b1341',
imageSizeInBytes='53687091200', volumeFormat='COW',
newImageId='1407954a-6d41-4f01-928b-98afa8ef215f', imageType='Sparse',
newImageDescription='', imageInitialSizeInBytes='0',
imageId='5d65f77d-25fa-4879-8144-6147a16f3ae3',
sourceImageGroupId='09db7fdc-c4d3-4fbd-a50e-1b04de2b1341',
shouldAddBitmaps='false', legal='true', sequenceNumber='2',
bitmap='bf62fab5-fb82-4c03-ac60-ddb9943c9f0d'}), log id: 5ce87935
2024-06-13 10:36:48,055+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand]
(default task-8) [38c00ae1-a837-4758-85a9-ee9943395753] FINISH,
CreateVolumeVDSCommand, return: 1407954a-6d41-4f01-928b-98afa8ef215f,
log id: 5ce87935
2024-06-13 10:36:48,058+02 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-8)
[38c00ae1-a837-4758-85a9-ee9943395753] CommandAsyncTask::Adding
CommandMultiAsyncTasks object for command
'a3f817d8-ca15-4fff-a411-ac0294e2ea70'
2024-06-13 10:36:48,058+02 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-8)
[38c00ae1-a837-4758-85a9-ee9943395753]
CommandMultiAsyncTasks::attachTask: Attaching task
'60e6899b-f59f-404d-af8f-d2ef73e0c908' to command
'a3f817d8-ca15-4fff-a411-ac0294e2ea70'.
2024-06-13 10:36:48,067+02 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-8)
[38c00ae1-a837-4758-85a9-ee9943395753] Adding task
'60e6899b-f59f-404d-af8f-d2ef73e0c908' (Parent Command 'CreateSnapshot',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling
hasn't started yet..
2024-06-13 10:36:48,080+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-8)
[38c00ae1-a837-4758-85a9-ee9943395753] BaseAsyncTask::startPollingTask:
Starting to poll task '86720608-b5bd-4a28-b7ee-b1a3507592aa'.
2024-06-13 10:36:48,081+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-8)
[38c00ae1-a837-4758-85a9-ee9943395753] BaseAsyncTask::startPollingTask:
Starting to poll t
ask '60e6899b-f59f-404d-af8f-d2ef73e0c908'.
2024-06-13 10:36:48,147+02 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-8) [38c00ae1-a837-4758-85a9-ee9943395753] EVENT_ID:
USER_CREATE_SNAPS
HOT(45), Snapshot 'Auto-generated for Backup VM' creation for VM
'mercurio' was initiated by admin@internal.
2024-06-13 10:36:48,151+02 ERROR
[org.ovirt.engine.core.bll.storage.backup.HybridBackupCommand] (default
task-8) [38c00ae1-a837-4758-85a9-ee9943395753] Command
'org.ovirt.engine.core.bll.
storage.backup.HybridBackupCommand' failed: CallableStatementCallback;
SQL [{call insertvmbackup(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)}]; ERROR:
duplicate key value violates unique constraint
"vm_backups_pkey"
Detail: Key (backup_id)=(d5bed8c5-f172-47c7-9827-5dda51490ee8)
already exists.
Where: SQL statement "INSERT INTO vm_backups (
backup_id,
from_checkpoint_id,
to_checkpoint_id,
vm_id,
host_id,
phase,
_create_date,
_update_date,
description,
backup_type,
snapshot_id
)
VALUES (
v_backup_id,
v_from_checkpoint_id,
v_to_checkpoint_id,
v_vm_id,
v_host_id,
v_phase,
v__create_date,
v__update_date,
v_description,
v_backup_type,
v_snapshot_id
)"
PL/pgSQL function insertvmbackup(uuid,uuid,uuid,uuid,uuid,text,timestamp
with time zone,timestamp with time zone,character varying,character
varying,uuid) line 3 at SQL statement; nested exception is
org.postgresql.util.PSQLException: ERROR: duplicate key value violates
unique constraint "vm_backups_pkey"
Detail: Key (backup_id)=(d5bed8c5-f172-47c7-9827-5dda51490ee8)
already exists.
Where: SQL statement "INSERT INTO vm_backups (
backup_id,
from_checkpoint_id,
to_checkpoint_id,
vm_id,
host_id,
phase,
_create_date,
_update_date,
description,
backup_type,
snapshot_id
)
VALUES (
v_backup_id,
v_from_checkpoint_id,
v_to_checkpoint_id,
v_vm_id,
v_host_id,
v_phase,
v__create_date,
v__update_date,
v_description,
v_backup_type,
v_snapshot_id
)"
PL/pgSQL function insertvmbackup(uuid,uuid,uuid,uuid,uuid,text,timestamp
with time zone,timestamp with time zone,character varying,character
varying,uuid) line 3 at SQL statement
------------------------------------------------------
Thanks in advance,
Giulio
10 months, 1 week
VM Windows 2022 always been shutdown
by Kalil de A. Carvalho
Hello all.
Here, in my company, we have oVirt verson 4.5.4-1.el9 running with a lots
of VM's but all Windows 2022 and 11 has a strange behaviour, sometimes just
going down with this message:
"VM VM-NAME is down with error. Exit message: Lost connection with qemu
process."
Looking for any answer on logs I found out this message:
"Code=qemu-kvm: ../hw/core/cpu-sysemu.c:77: int
cpu_asidx_from_attrs(CPUState *, MemTxAttrs): Assertion `ret <
cpu->num_ases && ret >= 0' failed.
2024-06-03 13:59:46.809+0000: shutting down, reason=crashed"
We don't understand why it just has problems with Windwos 2022 and 11.
Windows 2019 run without problem.
Can any one have this kind of situation? Does it have any fixed action?
Best regars
--
Atenciosamente,
Kalil de A. Carvalho
10 months, 1 week
New installation of oVirt 4.5.6: issues with bll.jar
by samu.paaso@liberator.fi
Hello,
We are having issues with a brand new deployment of oVirt. After
installation, the users are unable to access the Administration or VM
portals, they'll face error 500. Other panels, like Keycloak and Grafana
work as expected. Digging deeper, there seems to be an issue with file
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar as
described in the log /var/log/ovirt-engine/server.log:
2024-06-11 22:18:12,459+03 INFO [org.jboss.weld.deployer] (MSC service
thread 1-8) WFLYWELD0003: Processing weld deployment restapi.war
2024-06-11 22:18:12,468+03 INFO [org.wildfly.security] (MSC service
thread 1-2) ELY00001: WildFly Elytron version 1.16.1.Final
2024-06-11 22:18:12,665+03 INFO
[org.hibernate.validator.internal.util.Version] (MSC service thread 1-8)
HV000001: Hibernate Validator 6.0.22.Final
2024-06-11 22:18:12,762+03 INFO [org.wildfly.extension.undertow]
(ServerService Thread Pool -- 47) WFLYUT0021: Registered web context:
'/ovirt-engine/web-ui' for server 'default-server'
2024-06-11 22:18:12,761+03 INFO [org.wildfly.extension.undertow]
(ServerService Thread Pool -- 49) WFLYUT0021: Registered web context:
'/ovirt-engine/apidoc' for server 'default-server'
2024-06-11 22:18:12,845+03 INFO [org.infinispan.CONTAINER]
(ServerService Thread Pool -- 44) ISPN000128: Infinispan version:
Infinispan 'Taedonggang' 12.1.4.Final
2024-06-11 22:18:13,041+03 INFO [org.infinispan.CONTAINER]
(ServerService Thread Pool -- 49) ISPN000556: Starting user marshaller
'org.wildfly.clustering.infinispan.marshalling.jboss.JBossMarshaller'
2024-06-11 22:18:13,061+03 INFO [org.jboss.as.server.deployment] (MSC
service thread 1-2) WFLYSRV0207: Starting subdeployment (runtime-name:
"enginesso.war")
2024-06-11 22:18:13,061+03 INFO [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0207: Starting subdeployment (runtime-name:
"bll.jar")
2024-06-11 22:18:13,062+03 INFO [org.jboss.as.server.deployment] (MSC
service thread 1-1) WFLYSRV0207: Starting subdeployment (runtime-name:
"webadmin.war")
2024-06-11 22:18:13,062+03 INFO [org.jboss.as.server.deployment] (MSC
service thread 1-6) WFLYSRV0207: Starting subdeployment (runtime-name:
"docs.war")
2024-06-11 22:18:13,068+03 INFO [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0207: Starting subdeployment (runtime-name:
"root.war")
2024-06-11 22:18:13,068+03 INFO [org.jboss.as.server.deployment] (MSC
service thread 1-8) WFLYSRV0207: Starting subdeployment (runtime-name:
"welcome.war")
2024-06-11 22:18:13,069+03 INFO [org.jboss.as.server.deployment] (MSC
service thread 1-7) WFLYSRV0207: Starting subdeployment (runtime-name:
"services.war")
2024-06-11 22:18:13,080+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/utils.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,080+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/uutils.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,093+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/sshd-core.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,093+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/sshd-common.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,093+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/jcl-over-slf4j.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,093+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/commons-beanutils.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,093+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/commons-logging.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,093+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/commons-compress.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,094+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/commons-lang.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,094+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/commons-codec.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,094+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/glance-client.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,094+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/glance-model.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,094+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/cinder-client.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,094+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/cinder-model.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,094+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/cors-filter.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,094+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/compat.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,095+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/common.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,095+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/hibernate-validator.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,095+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/classmate.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,095+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/jboss-modules.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,095+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/dal.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,095+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/spring-jdbc.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,095+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/spring-tx.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,095+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/java-client-kubevirt.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,096+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/vdsm-jsonrpc-java-client.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,096+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/commons-lang3.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,096+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/commons-collections.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,096+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/snakeyaml.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,096+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/aaa.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,096+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/httpcore.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,096+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/httpclient.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,096+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/extensions-manager.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,096+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/builtin.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,097+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/searchbackend.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,097+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/branding.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,097+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/ovirt-engine-extensions-api.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,097+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/keystone-client.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,097+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/openstack-client.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,097+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/keystone-model.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,097+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/quantum-client.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,097+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/quantum-model.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,098+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/resteasy-connector.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,098+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/mail.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,098+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/jboss-interceptors-api_1.1_spec.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,098+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/xmlrpc-client.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,098+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/xmlrpc-common.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,098+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/ws-commons-util.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,098+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/xml-apis.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,098+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/spring-core.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,099+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/spring-jcl.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,099+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/spring-beans.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,099+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/spring-context.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,099+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/spring-aop.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,099+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry
lib/spring-expression.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar does
not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,099+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0059: Class Path entry lib/jboss-logging.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar
does not point to a valid jar for a Class-Path reference.
2024-06-11 22:18:13,162+03 INFO [org.jboss.weld.Version] (MSC service
thread 1-4) WELD-000900: 3.1.7 (SP1)
2024-06-11 22:18:13,426+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 48) WFLYCLINF0002: Started
authenticationSessions cache from keycloak container
2024-06-11 22:18:13,426+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 54) WFLYCLINF0002: Started authorization
cache from keycloak container
2024-06-11 22:18:13,428+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 51) WFLYCLINF0002: Started work cache from
keycloak container
2024-06-11 22:18:13,428+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 45) WFLYCLINF0002: Started clientSessions
cache from keycloak container
2024-06-11 22:18:13,429+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 44) WFLYCLINF0002: Started
offlineClientSessions cache from keycloak container
2024-06-11 22:18:13,429+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 46) WFLYCLINF0002: Started sessions cache
from keycloak container
2024-06-11 22:18:13,429+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 47) WFLYCLINF0002: Started loginFailures
cache from keycloak container
2024-06-11 22:18:13,429+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 49) WFLYCLINF0002: Started offlineSessions
cache from keycloak container
2024-06-11 22:18:13,430+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 50) WFLYCLINF0002: Started keys cache from
keycloak container
2024-06-11 22:18:13,430+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 52) WFLYCLINF0002: Started realms cache
from keycloak container
2024-06-11 22:18:13,430+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 55) WFLYCLINF0002: Started users cache
from keycloak container
2024-06-11 22:18:13,430+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 53) WFLYCLINF0002: Started actionTokens
cache from keycloak container
2024-06-11 22:18:13,456+03 WARN [org.jboss.as.server.deployment] (MSC
service thread 1-7) WFLYSRV0273: Excluded subsystem webservices via
jboss-deployment-structure.xml does not exist.
2024-06-11 22:18:14,479+03 WARN [org.jboss.as.dependency.deprecated]
(MSC service thread 1-1) WFLYSRV0221: Deployment
"deployment.engine.ear.bll.jar" is using a deprecated module ("sun.jdk")
which may be removed in future versions without notice.
2024-06-11 22:18:14,499+03 INFO [org.jboss.weld.deployer] (MSC service
thread 1-5) WFLYWELD0003: Processing weld deployment engine.ear
2024-06-11 22:18:14,588+03 INFO [org.jboss.weld.deployer] (MSC service
thread 1-3) WFLYWELD0003: Processing weld deployment root.war
2024-06-11 22:18:14,665+03 INFO [org.jboss.weld.deployer] (MSC service
thread 1-2) WFLYWELD0003: Processing weld deployment services.war
2024-06-11 22:18:14,674+03 INFO [org.jboss.weld.deployer] (MSC service
thread 1-6) WFLYWELD0003: Processing weld deployment docs.war
2024-06-11 22:18:14,703+03 INFO [org.jboss.weld.deployer] (MSC service
thread 1-4) WFLYWELD0003: Processing weld deployment webadmin.war
2024-06-11 22:18:14,709+03 INFO [org.jboss.weld.deployer] (MSC service
thread 1-1) WFLYWELD0003: Processing weld deployment enginesso.war
2024-06-11 22:18:14,731+03 INFO [org.jboss.weld.deployer] (MSC service
thread 1-7) WFLYWELD0003: Processing weld deployment welcome.war
2024-06-11 22:18:14,783+03 INFO [org.jboss.weld.deployer] (MSC service
thread 1-8) WFLYWELD0003: Processing weld deployment bll.jar
2024-06-11 22:18:14,798+03 INFO [org.jboss.as.ejb3.deployment] (MSC
service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named
'LockManager' in deployment unit 'subdeployment "bll.jar" of deployment
"engine.ear"' are as follows:
java:global/engine/bll/LockManager!org.ovirt.engine.core.utils.lock.LockManager
java:app/bll/LockManager!org.ovirt.engine.core.utils.lock.LockManager
java:module/LockManager!org.ovirt.engine.core.utils.lock.LockManager
java:global/engine/bll/LockManager
java:app/bll/LockManager
java:module/LockManager
2024-06-11 22:18:14,798+03 INFO [org.jboss.as.ejb3.deployment] (MSC
service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named
'Backend' in deployment unit 'subdeployment "bll.jar" of deployment
"engine.ear"' are as follows:
java:global/engine/bll/Backend!org.ovirt.engine.core.bll.interfaces.BackendCommandObjectsHandler
java:app/bll/Backend!org.ovirt.engine.core.bll.interfaces.BackendCommandObjectsHandler
java:module/Backend!org.ovirt.engine.core.bll.interfaces.BackendCommandObjectsHandler
java:global/engine/bll/Backend!org.ovirt.engine.core.bll.interfaces.BackendInternal
java:app/bll/Backend!org.ovirt.engine.core.bll.interfaces.BackendInternal
java:module/Backend!org.ovirt.engine.core.bll.interfaces.BackendInternal
java:global/engine/bll/Backend!org.ovirt.engine.core.common.interfaces.BackendLocal
java:app/bll/Backend!org.ovirt.engine.core.common.interfaces.BackendLocal
java:module/Backend!org.ovirt.engine.core.common.interfaces.BackendLocal
2024-06-11 22:18:14,798+03 INFO [org.jboss.as.ejb3.deployment] (MSC
service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named
'OvirtGlusterSchedulingService' in deployment unit 'subdeployment
"bll.jar" of deployment "engine.ear"' are as follows:
java:global/engine/bll/OvirtGlusterSchedulingService!org.ovirt.engine.core.bll.scheduling.OvirtGlusterSchedulingService
java:app/bll/OvirtGlusterSchedulingService!org.ovirt.engine.core.bll.scheduling.OvirtGlusterSchedulingService
java:module/OvirtGlusterSchedulingService!org.ovirt.engine.core.bll.scheduling.OvirtGlusterSchedulingService
java:global/engine/bll/OvirtGlusterSchedulingService
java:app/bll/OvirtGlusterSchedulingService
java:module/OvirtGlusterSchedulingService
2024-06-11 22:18:14,799+03 INFO [org.jboss.as.ejb3.deployment] (MSC
service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named
'InitBackendServicesOnStartupBean' in deployment unit 'subdeployment
"bll.jar" of deployment "engine.ear"' are as follows:
java:global/engine/bll/InitBackendServicesOnStartupBean!org.ovirt.engine.core.bll.InitBackendServicesOnStartup
java:app/bll/InitBackendServicesOnStartupBean!org.ovirt.engine.core.bll.InitBackendServicesOnStartup
java:module/InitBackendServicesOnStartupBean!org.ovirt.engine.core.bll.InitBackendServicesOnStartup
java:global/engine/bll/InitBackendServicesOnStartupBean
java:app/bll/InitBackendServicesOnStartupBean
java:module/InitBackendServicesOnStartupBean
2024-06-11 22:18:14,799+03 INFO [org.jboss.as.ejb3.deployment] (MSC
service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named
'ManagedBlockStorageDiskUtil' in deployment unit 'subdeployment
"bll.jar" of deployment "engine.ear"' are as follows:
java:global/engine/bll/ManagedBlockStorageDiskUtil!org.ovirt.engine.core.bll.storage.disk.managedblock.util.ManagedBlockStorageDiskUtil
java:app/bll/ManagedBlockStorageDiskUtil!org.ovirt.engine.core.bll.storage.disk.managedblock.util.ManagedBlockStorageDiskUtil
java:module/ManagedBlockStorageDiskUtil!org.ovirt.engine.core.bll.storage.disk.managedblock.util.ManagedBlockStorageDiskUtil
java:global/engine/bll/ManagedBlockStorageDiskUtil
java:app/bll/ManagedBlockStorageDiskUtil
java:module/ManagedBlockStorageDiskUtil
2024-06-11 22:18:14,799+03 INFO [org.jboss.as.ejb3.deployment] (MSC
service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named
'MacPoolPerCluster' in deployment unit 'subdeployment "bll.jar" of
deployment "engine.ear"' are as follows:
java:global/engine/bll/MacPoolPerCluster!org.ovirt.engine.core.bll.network.macpool.MacPoolPerCluster
java:app/bll/MacPoolPerCluster!org.ovirt.engine.core.bll.network.macpool.MacPoolPerCluster
java:module/MacPoolPerCluster!org.ovirt.engine.core.bll.network.macpool.MacPoolPerCluster
java:global/engine/bll/MacPoolPerCluster
java:app/bll/MacPoolPerCluster
java:module/MacPoolPerCluster
2024-06-11 22:18:14,934+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 55) WFLYCLINF0002: Started realmRevisions
cache from keycloak container
2024-06-11 22:18:14,943+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 55) WFLYCLINF0002: Started userRevisions
cache from keycloak container
2024-06-11 22:18:14,950+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 55) WFLYCLINF0002: Started
authorizationRevisions cache from keycloak container
2024-06-11 22:18:15,074+03 INFO [org.infinispan.CONTAINER]
(ServerService Thread Pool -- 50) ISPN000556: Starting user marshaller
'org.wildfly.clustering.infinispan.marshalling.jboss.JBossMarshaller'
2024-06-11 22:18:15,095+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 50) WFLYCLINF0002: Started dashboard cache
from ovirt-engine container
2024-06-11 22:18:15,095+03 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 52) WFLYCLINF0002: Started inventory cache
from ovirt-engine container
2024-06-11 22:18:18,858+03 INFO
[org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool --
56) RESTEASY002225: Deploying javax.ws.rs.core.Application: class
org.ovirt.engine.api.restapi.BackendApplication
2024-06-11 22:18:18,859+03 INFO
[org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool --
56) RESTEASY002220: Adding singleton resource
org.ovirt.engine.api.restapi.resource.BackendApiResource from
Application class org.ovirt.engine.api.restapi.BackendApplication
2024-06-11 22:18:18,859+03 INFO
[org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool --
56) RESTEASY002210: Adding provider singleton
org.ovirt.engine.api.restapi.resource.validation.JsonExceptionMapper
from Application class org.ovirt.engine.api.restapi.BackendApplication
2024-06-11 22:18:18,859+03 INFO
[org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool --
56) RESTEASY002210: Adding provider singleton
org.ovirt.engine.api.restapi.resource.validation.IOExceptionMapper from
Application class org.ovirt.engine.api.restapi.BackendApplication
2024-06-11 22:18:18,859+03 INFO
[org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool --
56) RESTEASY002210: Adding provider singleton
org.ovirt.engine.api.restapi.resource.validation.MalformedIdExceptionMapper
from Application class org.ovirt.engine.api.restapi.BackendApplication
2024-06-11 22:18:18,859+03 INFO
[org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool --
56) RESTEASY002210: Adding provider singleton
org.ovirt.engine.api.restapi.resource.validation.ValidationExceptionMapper
from Application class org.ovirt.engine.api.restapi.BackendApplication
2024-06-11 22:18:18,860+03 INFO
[org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool --
56) RESTEASY002210: Adding provider singleton
org.ovirt.engine.api.restapi.resource.validation.MappingExceptionMapper
from Application class org.ovirt.engine.api.restapi.BackendApplication
2024-06-11 22:18:18,919+03 INFO [org.wildfly.extension.undertow]
(ServerService Thread Pool -- 56) WFLYUT0021: Registered web context:
'/ovirt-engine/api' for server 'default-server'
2024-06-11 22:18:19,391+03 INFO
[org.hibernate.jpa.internal.util.LogHelper] (ServerService Thread Pool
-- 55) HHH000204: Processing PersistenceUnitInfo [
name: keycloak-default
...]
2024-06-11 22:18:19,455+03 INFO [org.hibernate.Version] (ServerService
Thread Pool -- 55) HHH000412: Hibernate Core {5.3.20.Final}
2024-06-11 22:18:19,457+03 INFO [org.hibernate.cfg.Environment]
(ServerService Thread Pool -- 55) HHH000206: hibernate.properties not
found
2024-06-11 22:18:19,589+03 INFO
[org.hibernate.annotations.common.Version] (ServerService Thread Pool --
55) HCANN000001: Hibernate Commons Annotations {5.0.5.Final}
2024-06-11 22:18:19,778+03 INFO [org.hibernate.dialect.Dialect]
(ServerService Thread Pool -- 55) HHH000400: Using dialect:
org.hibernate.dialect.PostgreSQL95Dialect
2024-06-11 22:18:19,851+03 INFO
[org.hibernate.engine.jdbc.env.internal.LobCreatorBuilderImpl]
(ServerService Thread Pool -- 55) HHH000424: Disabling contextual LOB
creation as createClob() method threw error :
java.lang.reflect.InvocationTargetException
2024-06-11 22:18:19,856+03 INFO [org.hibernate.type.BasicTypeRegistry]
(ServerService Thread Pool -- 55) HHH000270: Type registration
[java.util.UUID] overrides previous :
org.hibernate.type.UUIDBinaryType@71c04561
2024-06-11 22:18:19,865+03 INFO
[org.hibernate.envers.boot.internal.EnversServiceImpl] (ServerService
Thread Pool -- 55) Envers integration enabled? : true
2024-06-11 22:18:20,282+03 INFO [org.hibernate.orm.beans]
(ServerService Thread Pool -- 55) HHH10005002: No explicit CDI
BeanManager reference was passed to Hibernate, but CDI is available on
the Hibernate ClassLoader.
2024-06-11 22:18:21,006+03 INFO [org.jboss.weld.Bootstrap] (Weld Thread
Pool -- 1) WELD-001125: Illegal bean type
org.ovirt.engine.core.dao.DefaultGenericDao<org.ovirt.engine.core.common.businessentities.Provider<?>,
org.ovirt.engine.core.compat.Guid> ignored on
[EnhancedAnnotatedTypeImpl] public @Named @Singleton class
org.ovirt.engine.core.dao.provider.ProviderDaoImpl
2024-06-11 22:18:21,016+03 INFO [org.jboss.weld.Bootstrap] (Weld Thread
Pool -- 1) WELD-001125: Illegal bean type interface
org.ovirt.engine.core.dao.ReadDao<org.ovirt.engine.core.common.businessentities.Provider<?>,class
org.ovirt.engine.core.compat.Guid> ignored on
[EnhancedAnnotatedTypeImpl] public @Named @Singleton class
org.ovirt.engine.core.dao.provider.ProviderDaoImpl
2024-06-11 22:18:21,016+03 INFO [org.jboss.weld.Bootstrap] (Weld Thread
Pool -- 1) WELD-001125: Illegal bean type class
org.ovirt.engine.core.dao.DefaultReadDao<org.ovirt.engine.core.common.businessentities.Provider<?>,class
org.ovirt.engine.core.compat.Guid> ignored on
[EnhancedAnnotatedTypeImpl] public @Named @Singleton class
org.ovirt.engine.core.dao.provider.ProviderDaoImpl
2024-06-11 22:18:21,016+03 INFO [org.jboss.weld.Bootstrap] (Weld Thread
Pool -- 1) WELD-001125: Illegal bean type
org.ovirt.engine.core.dao.GenericDao<org.ovirt.engine.core.common.businessentities.Provider<?>,
org.ovirt.engine.core.compat.Guid> ignored on
[EnhancedAnnotatedTypeImpl] public @Named @Singleton class
org.ovirt.engine.core.dao.provider.ProviderDaoImpl
2024-06-11 22:18:21,016+03 INFO [org.jboss.weld.Bootstrap] (Weld Thread
Pool -- 1) WELD-001125: Illegal bean type interface
org.ovirt.engine.core.dao.ModificationDao<org.ovirt.engine.core.common.businessentities.Provider<?>,class
org.ovirt.engine.core.compat.Guid> ignored on
[EnhancedAnnotatedTypeImpl] public @Named @Singleton class
org.ovirt.engine.core.dao.provider.ProviderDaoImpl
2024-06-11 22:18:21,016+03 INFO [org.jboss.weld.Bootstrap] (Weld Thread
Pool -- 1) WELD-001125: Illegal bean type
org.ovirt.engine.core.dao.SearchDao<org.ovirt.engine.core.common.businessentities.Provider<?>>
ignored on [EnhancedAnnotatedTypeImpl] public @Named @Singleton class
org.ovirt.engine.core.dao.provider.ProviderDaoImpl
2024-06-11 22:18:21,061+03 INFO [org.jboss.weld.Bootstrap] (Weld Thread
Pool -- 2) WELD-001125: Illegal bean type
org.springframework.jdbc.core.RowMapper<org.ovirt.engine.core.common.businessentities.Provider<?>>
ignored on [EnhancedAnnotatedTypeImpl] private static class
org.ovirt.engine.core.dao.provider.ProviderDaoImpl$ProviderRowMapper
2024-06-11 22:18:21,788+03 INFO
[org.hibernate.hql.internal.QueryTranslatorFactoryInitiator]
(ServerService Thread Pool -- 55) HHH000397: Using
ASTQueryTranslatorFactory
2024-06-11 22:18:22,098+03 INFO [org.wildfly.extension.undertow]
(ServerService Thread Pool -- 51) WFLYUT0021: Registered web context:
'/ovirt-engine/docs' for server 'default-server'
2024-06-11 22:18:22,103+03 INFO [org.wildfly.extension.undertow]
(ServerService Thread Pool -- 56) WFLYUT0021: Registered web context:
'/' for server 'default-server'
2024-06-11 22:18:22,113+03 INFO [org.wildfly.extension.undertow]
(ServerService Thread Pool -- 51) WFLYUT0021: Registered web context:
'/ovirt-engine/services' for server 'default-server'
2024-06-11 22:18:22,159+03 WARN
[org.jboss.jca.core.connectionmanager.pool.strategy.OnePool]
(ServerService Thread Pool -- 51) IJ000407: No lazy enlistment available
for ENGINEDataSource
2024-06-11 22:18:22,171+03 WARN
[org.jboss.jca.core.connectionmanager.pool.strategy.OnePool]
(ServerService Thread Pool -- 51) IJ000407: No lazy enlistment available
for DWHDataSource
2024-06-11 22:18:22,182+03 INFO [org.wildfly.extension.undertow]
(ServerService Thread Pool -- 49) WFLYUT0021: Registered web context:
'/ovirt-engine' for server 'default-server'
2024-06-11 22:18:22,201+03 INFO [org.wildfly.extension.undertow]
(ServerService Thread Pool -- 51) WFLYUT0021: Registered web context:
'/ovirt-engine/webadmin' for server 'default-server'
Per my understanding, this doesn't seem like an issue with the
PostgreSQL JDBC connector as referenced here
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/SBCWNXLFLJBK...
, but I could be mistaken. What further steps could be taken to mitigate
this issue? This is a new deployment, so I'd assume there'd be issues if
upgrading older instances but not with new deployments.. The server is
running AlmaLinux 9.4 so maybe this would affect this?
Best regards,
Samu Paaso
10 months, 1 week
PKIX path validation failed
by Ali Gusainov
Hello experts.
Environment:
oVirt: Software Version:4.4.10.7-1.el8
OS: CentOS Linux release 8.5.2111
Symptoms:
1. At login prompt I see this:
"PKIX path validation failed: java.security.certCertPathValidatorException: validity check failed"
which successfully resolved by "engine-setup --offline"
2. Now the host at 'Unassigned' status and all VMs marked with '?' symbol.
At vdsm.log I found message:
ERROR (Reactor thread) [ProtocolDetector.SSLHandshakeDispatcher] ssl handshake: socket error, address: ::ffff:..... (sslutils:272)
At engine.log I found messages:
ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-2) [] Unable to RefreshCapabilities: VDSNetworkException: VDSGenericException: VDSNetworkException: PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed
...
2024-06-10 17:54:13,576+05 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-8) [] Unable to RefreshCapabilities: VDSNetworkException: VDSGenericException: VDSNetworkException: PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed
Cause:
Certificate expired.
Questions:
1. How to bring host 'Online'?
2. How to properly update SSL?
10 months, 1 week
Can't access (ping) other 10G interface on host from Hosted Engine
by Patrick Lomakin
Hello community. I am using the edge version of the Ovirt installation and hosts with Rocky 9. One of my tasks was to partition the networks in the cluster. To make the networks work more properly in Ovirt, I divided the network into VLANs. VLAN 2 is for “ovirtmgmt” network, VLAN 3 is for “gluster-net”, VLAN - 4 is for “migration”. I configured routing between VLANs. Each host has a 10G NIC for the gluster network, and two 1G Bonded NICs for the management network. In the cluster settings, I switched the gluster network to the configured VLAN3 in ovirt. Before that I configured a gluster network for each host via Ovirt, attached it to the 10G interface, assigned static IPs and synchronized the networks. Then I tried to connect bricks and create volume through Ovirt panel, but I got an error to access IP from VLAN3. I pinged HOST1 <---> HOST2 <---> HOST3 and the gluster network pings perfectly in either direction from any host. But the problem is that Hosted Engine can't ping the gluster IP from the 10G interface on any host. Consequently, Hosted Engine does not see the configured IP on the second 10G interface of the host. Only the first interface in the VLAN2 subnet is pinged. What can be the problem? In my eyes this is a pretty common solution that is used everywhere. Thank for any help
10 months, 1 week
Unable to access the oVirt Manager console and are also unable to connect via SSH
by Sachendra Shukla
HI Team,
We are currently unable to access the oVirt Manager console and are also
unable to connect via SSH. The error message we are receiving is: "The
server is temporarily unable to service your request due to maintenance
downtime or capacity problems. Please try again later."
Please provide a resolution if you have one.
Note: We are able to ping the oVirt Manager IP also vms are working .
Below are the snapshots for your reference.
[image: image.png]
[image: image.png]
[image: image.png]
--
Regards,
*Sachendra Shukla*
IT Administrator
Yagna iQ, Inc. and subsidiaries
Email: Sachendra.shukla(a)yagnaiq.com <dnyanesh.tisge(a)yagnaiq.com>
Website: https://yagnaiq.com
Privacy Policy: https://www.yagnaiq.com/privacy-policy/
<https://www.linkedin.com/company/yagnaiq/mycompany/>
<https://www.youtube.com/channel/UCeHXOpcUxWvOJO0aegD99Jg>
This communication and any attachments may contain confidential information
and/or Yagna iQ, Inc. copyright material.
All unauthorized use, disclosure, or distribution is prohibited. If you are
not the intended recipient, please notify Yagna iQ immediately by replying
to the email and destroy all copies of this communication.
This email has been scanned for all known viruses. The sender does not
accept liability for any damage inflicted by viewing the content of this
email.
10 months, 1 week
Re: HCI Gluster Hosted Engine unexpected behavior
by Patryk Lamakin
And when using a 2 + 1 arbiter replica, if I understand correctly, you can
only disable 1 replica host other than the arbiter? What happens in this
case if you disable only the host with the arbiter and leave the other 2
replicas running?
10 months, 1 week
HCI Gluster Hosted Engine unexpected behavior
by Patrick Lomakin
Hey, everybody. I have 3 hosts on which Gluster replica 3 volume called “engine” is deployed. When I try to put 2 of the 3 hosts into maintenance mode, my deployment crashes. I originally expected that with replica 3 I could shut down 2 of the hosts and everything would work. However, I saw that the default for Gluster is quorum server not allowing more than one host to be disabled. But even after disabling the quorum and verifying that the Gluster disk is available with one host enabled, Hosted Engine still does not access the storage. Who can explain me then the point of using replica 3 if I can't disable 2 hosts and is there any way to fix this behavior?
10 months, 1 week
Re: Problems with RHEL 9.4 hosts
by Devin A. Bougie
Unfortunately I’m not exactly sure what the problem was, but I was able to get the fully-updated EL9.4 host back in the cluster after manually deleting all of the iSCSI nodes.
Some of the iscsiadm commands printed worked fine:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m iface
bond1 tcp,<empty>,<empty>,bond1,<empty>
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.54:3260,1 --op=new
New iSCSI node [tcp:[hw=,ip=,net_if=bond1,iscsi_if=bond1] 192.168.56.54,3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012] added
[root@lnxvirt06 ~]# iscsiadm -m node
192.168.56.54:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.001
192.168.56.54:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.001
192.168.56.56:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.101
192.168.56.56:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.101
192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012
192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012
192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112
192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.001
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.001
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.012
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.012
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.101
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.101
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.112
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.112
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.001
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.001
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.012
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.101
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.112
———
But others didn’t, where the only difference is the portal:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.55:3260,1 --op=new
iscsiadm: Error while adding record: invalid parameter
———
Likewise, I could delete some nodes using iscsiadm but not others:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.54:3260,1 --op=delete
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.55:3260,1 --op=delete
iscsiadm: Could not execute operation on all records: invalid parameter
[root@lnxvirt06 ~]# iscsiadm -m node -p 192.168.56.50 -o delete
iscsiadm: Could not execute operation on all records: invalid parameter
———
At this point I wiped out /var/lib/iscsi/, rebooted, and everything just worked.
Thanks so much for your time and help!
Sincerely,
Devin
> On Jun 7, 2024, at 10:26 AM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>
> 2024-06-07 09:59:16,720-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
> 2024-06-07 09:59:16,751-0400 INFO (jsonrpc/0) [storage.iscsi] Adding iscsi node for target 192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012 iface bond1 (iscsi:192)
> 2024-06-07 09:59:16,751-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.012', '-I', 'bond1', '-p', '192.168.56.55:3260,1', '--op=new'] (iscsiadm:104)
> 2024-06-07 09:59:16,785-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
> 2024-06-07 09:59:16,825-0400 ERROR (jsonrpc/0) [storage.storageServer] Could not configure connection to 192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012 and iface <IscsiInterface name='bond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm: Error while adding record: invalid parameter\n') (storageServer:580)
> Can you try to run those commands manually on the host?
> And see what it gives :)
> On 7/06/2024 16:13, Devin A. Bougie wrote:
>> Thank you! I added a warning at the line you indicated, which produces the following output:
>>
>> ———
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,452-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,493-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,532-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,565-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,595-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,636-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,670-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,720-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,751-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.012', '-I', 'bond1', '-p', '192.168.56.55:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,785-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,825-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8073743.001', '-I', 'bond1', '-p', '192.168.56.54:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,856-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,889-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8073743.101', '-I', 'bond1', '-p', '192.168.56.56:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,924-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,957-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.112', '-I', 'bond1', '-p', '192.168.56.57:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,987-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,018-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.012', '-I', 'bond1', '-p', '192.168.56.51:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,051-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,079-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.001', '-I', 'bond1', '-p', '192.168.56.50:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,112-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,142-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.112', '-I', 'bond1', '-p', '192.168.56.53:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,174-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,204-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.101', '-I', 'bond1', '-p', '192.168.56.52:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,237-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,186-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,234-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,268-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,310-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,343-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,370-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,408-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,442-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> ———
>>
>> The full vdsm.log is below.
>>
>> Thanks again,
>> Devin
>>
>>
>>
>>
>> > On Jun 7, 2024, at 8:14 AM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>> >
>> > Weird, I have the same 6.2.1.9-1 version, and here it works.
>> > You can try to add some print here: https://github.com/oVirt/vdsm/blob/4d11cae0b1b7318b282d9f90788748c0ef3cc9...
>> >
>> > This should print all executed iscsiadm commands.
>> >
>> >
>> > On 6/06/2024 20:50, Devin A. Bougie wrote:
>> >> Awesome, thanks again. Yes, the host is fixed by just downgrading the iscsi-initiator-utils and iscsi-initiator-utils-iscsiuio packages from:
>> >> 6.2.1.9-1.gita65a472.el9.x86_64
>> >> to:
>> >> 6.2.1.4-3.git2a8f9d8.el9.x86_64
>> >>
>> >> Any additional pointers of where to look or how to debug the iscsiadm calls would be greatly appreciated.
>> >>
>> >> Many thanks!
>> >> Devin
>> >>
>> >>> On Jun 6, 2024, at 2:04 PM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>> >>>
>> >>> 2024-06-06 13:28:10,478-0400 ERROR (jsonrpc/5) [storage.storageServer] Could not configure connection to 192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112 and iface <IscsiInterface name='bond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm: Error while adding record: invalid parameter\n') (storageServer:580)
>> >>>
>> >>> Seems like some issue with iscsiadm calls.
>> >>> Might want to debug which calls it does or what version change there is for iscsiadm.
>> >>>
>> >>>
>> >>>
>> >>> "Devin A. Bougie" <devin.bougie(a)cornell.edu> schreef op 6 juni 2024 19:32:29 CEST:
>> >>> Thanks so much! Yes, that patch fixed the “out of sync network” issue. However, we’re still unable to join a fully updated 9.4 host to the cluster - now with "Failed to connect Host to Storage Servers”. Downgrading all of the updated packages fixes the issue.
>> >>>
>> >>> Please see the attached vdsm.log and supervdsm.log from the host after updating it to EL 9.4 and then trying to activate it. Any more suggestions would be greatly appreciated.
>> >>>
>> >>> Thanks again,
>> >>> Devin
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>> On Jun 5, 2024, at 2:35 AM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>> >>>>
>> >>>> You most likely need the following patch:
>> >>>> https://github.com/oVirt/vdsm/commit/49eaf70c5a14eb00e85eac5f91ac36f010a9...
>> >>>>
>> >>>> Test with that, guess it's fixed then :)
>> >>>>
>> >>>> On 4/06/2024 22:33, Devin A. Bougie wrote:
>> >>>>> Are there any known incompatibilities with RHEL 9.4 (and derivatives)?
>> >>>>>
>> >>>>> We are running a 7-node ovirt 4.5.5-1.el8 self hosted engine cluster, with all of the hosts running AlmaLinux 9. After upgrading from 9.3 to 9.4, every node started flapping between “Up” and “NonOperational,” with VMs in turn migrating between hosts.
>> >>>>>
>> >>>>> I believe the underlying issue (or at least the point I got stuck at) was with two of our logical networks being stuck “out of sync” on all hosts. I was unable to synchronize networks or setup the networks using the UI. A reinstall of a host succeeded but then the host immediately reverted to the same state with the same networks being out of sync.
>> >>>>>
>> >>>>> I eventually found that if I downgraded the host from 9.4 to 9.3, it immediately became stable and back online.
>> >>>>>
>> >>>>> Are there any known incompatibilities with RHEL 9.4 (and derivatives)? If not, I’m happy to upgrade a single node to test. Please just let me know what log files and details would be most helpful in debugging what goes wrong.
>> >>>>>
>> >>>>> (And yes, I know we need to upgrade the hosted engine VM itself now that CentOS Stream 8 is now EOL).
>> >>>>>
>> >>>>> Many thanks,
>> >>>>> Devin
>> >>>>>
>> >>>>> _______________________________________________
>> >>>>> Users mailing list -- users(a)ovirt.org
>> >>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>> >>>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> >>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/PIROQCVLPVB...
>>
>>
>>
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/AIRCRFWLSG5...
>>
10 months, 1 week
Problems with RHEL 9.4 hosts
by Devin A. Bougie
Are there any known incompatibilities with RHEL 9.4 (and derivatives)?
We are running a 7-node ovirt 4.5.5-1.el8 self hosted engine cluster, with all of the hosts running AlmaLinux 9. After upgrading from 9.3 to 9.4, every node started flapping between “Up” and “NonOperational,” with VMs in turn migrating between hosts.
I believe the underlying issue (or at least the point I got stuck at) was with two of our logical networks being stuck “out of sync” on all hosts. I was unable to synchronize networks or setup the networks using the UI. A reinstall of a host succeeded but then the host immediately reverted to the same state with the same networks being out of sync.
I eventually found that if I downgraded the host from 9.4 to 9.3, it immediately became stable and back online.
Are there any known incompatibilities with RHEL 9.4 (and derivatives)? If not, I’m happy to upgrade a single node to test. Please just let me know what log files and details would be most helpful in debugging what goes wrong.
(And yes, I know we need to upgrade the hosted engine VM itself now that CentOS Stream 8 is now EOL).
Many thanks,
Devin
10 months, 1 week