Error when deploying oVirt 4.4 Hosted Engine
by staybox@gmail.com
Hello, I am getting an error and need help.
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 'not ipv6_deployment|bool and route_rules_ipv4.stdout | from_json | selectattr('priority', 'equalto', 100) | selectattr('dst', 'equalto', virbr_cidr_ipv4 | ipaddr('address') ) | list | length == 0' failed. The error was: error while evaluating conditional (not ipv6_deployment|bool and route_rules_ipv4.stdout | from_json | selectattr('priority', 'equalto', 100) | selectattr('dst', 'equalto', virbr_cidr_ipv4 | ipaddr('address') ) | list | length == 0): 'dict object' has no attribute 'dst'\n\nThe error appears to be in '/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/01_prepare_routing_rules.yml': line 81, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n changed_when: true\n - name: Add IPv4 inbound route rules\n ^ here\n"}
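For what it's worth, that conditional parses the host's routing-rule list as JSON and then filters on a 'dst' key, so rules that have no 'dst' field (the ordinary "from all lookup ..." rules) appear to be what trips the 'dict object' has no attribute 'dst' failure. A small, hedged way to look at exactly what the play is iterating over (plain iproute2/python3, nothing oVirt-specific; this assumes the role collects the rules with ip's JSON output):
```
# Dump the IPv4 rule list the role parses; look for priority-100 entries
# and for rules that lack a "dst" key.
ip -j rule list | python3 -m json.tool
ip rule list
```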
2 months, 2 weeks
How to list all snapshots?
by jorgevisentini@gmail.com
Hello everyone!
First, I would like to thank everyone involved in this wonderful project. I leave here my sincere thanks!
Does anyone know if it is possible to list all snapshots automatically? It can be via Ansible, Python, shell... any way that lists them all without having to go into each domain one by one.
Thank you all!
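A minimal sketch of one way to do this from the shell, going through the engine REST API with curl and jq; the engine URL, credentials and CA path below are placeholders, and it assumes the API returns JSON when asked via the Accept header (the same listing can also be done with the Python SDK or the ovirt.ovirt Ansible collection):
```
#!/bin/bash
# Placeholder values - point these at your own engine.
ENGINE="https://engine.example.com/ovirt-engine/api"
AUTH="admin@internal:password"
CA="/etc/pki/ovirt-engine/ca.pem"

# Walk every VM and print its snapshots (date + description).
curl -s --cacert "$CA" -u "$AUTH" -H 'Accept: application/json' "$ENGINE/vms" |
jq -r '.vm[]? | "\(.id) \(.name)"' |
while read -r id name; do
    echo "== $name =="
    curl -s --cacert "$CA" -u "$AUTH" -H 'Accept: application/json' \
         "$ENGINE/vms/$id/snapshots" |
    jq -r '.snapshot[]? | "  \(.date)  \(.description)"'
done
```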
2 months, 2 weeks
snapshot solution: Existing snapshots that were taken after this one will be erased.
by dhanaraj.ramesh@yahoo.com
Hi Team,
When I want to commit an older snapshot I get a warning stating "Existing snapshots that were taken after this one will be erased." Is there any way to retain the latest snapshots as they are in the chain?
I know the cloning and template-export options can preserve the latest snapshot data, but these consume additional storage space and take time.
3 months
Restart oVirt-Engine
by Jeremey Wise
How do I restart the oVirt engine without rebooting the hosting system?
# I tried the commands below, but they do not seem to affect the virtual machine
[root@thor iso]# systemctl restart ov
ovirt-ha-agent.service ovirt-imageio.service
ovn-controller.service ovs-delete-transient-ports.service
ovirt-ha-broker.service ovirt-vmconsole-host-sshd.service
ovsdb-server.service ovs-vswitchd.service
[root@thor iso]#
# You cannot restart the "HostedEngine" VM, as it responds:
Error while executing action:
HostedEngine:
- Cannot restart VM. This VM is not managed by the engine.
The reason: I had to do some work on a node and reboot it. It is back up,
the network is fine, Cockpit is working fine, and Gluster is fine, but the
oVirt engine refuses to accept that the node is up.
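For what it's worth, the ovirt-engine service runs inside the HostedEngine VM itself, not on the host, which is why it never shows up in the host's systemctl completion above. A hedged sketch of the usual sequence (the engine FQDN is a placeholder for your engine VM's name):
```
# Optional: keep the HA agents from reacting while the engine is briefly down
hosted-engine --set-maintenance --mode=global

# Restart the engine service inside the engine VM, not on the host
ssh root@<engine-fqdn> 'systemctl restart ovirt-engine'

# Re-enable HA monitoring afterwards
hosted-engine --set-maintenance --mode=none
```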
--
penguinpages <jeremey.wise(a)gmail.com>
3 months, 3 weeks
Unable to access ovirt Admin Screen from ovirt Host
by louisb@ameritech.net
I've reinstalled oVirt 4.4 on my server remotely via the cockpit terminal. I'm able to access the oVirt admin screen remotely from the laptop that I used for the install; however, using the same URL from the server console itself I'm unable to gain access to the admin screen.
Following the instructions in the documentation I've modified the file /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf to reflect the DNS name and the IP address I enter, but I'm still unable to access the screen from the server console.
What else needs to change in order to gain access from the server console?
Thanks
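For reference, the setting that normally matters in that file is SSO_ALTERNATE_ENGINE_FQDNS, which has to list every extra name or IP you browse to besides the FQDN given at engine-setup time; a hedged example with placeholder values, followed by the restart that makes it take effect:
```
# In /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf (example values):
#   SSO_ALTERNATE_ENGINE_FQDNS="ovirt.example.com 192.0.2.10"

# Restart the engine so SSO picks up the change:
systemctl restart ovirt-engine
```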
4 months, 4 weeks
SPM and Task error ...
by Enrico
Hi all,
my oVirt cluster has 3 hypervisors running CentOS 7.5.1804 with vdsm
4.20.39.1-1.el7; the oVirt engine is 4.2.4.5-1.el7 and the storage systems
are HP MSA P2000 and 2050 (fibre channel).
I need to stop one of the hypervisors for maintenance, but this system is
the Storage Pool Manager. For this reason I decided to manually activate
SPM on one of the other nodes, but this operation is not successful.
In the ovirt engine (engine.log) the error is this:
2019-07-25 12:39:16,744+02 INFO
[org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
ForceSelectSPMCommand internal: false. Entities affected : ID:
81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group
MANIPULATE_HOST with role type ADMIN
2019-07-25 12:39:16,745+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopOnIrsVDSCommand(
SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false'}), log id: 37bf4639
2019-07-25 12:39:16,747+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
ResetIrsVDSCommand(
ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
ignoreStopFailed='false'}), log id: 2522686f
2019-07-25 12:39:16,749+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopVDSCommand(HostName = infn-vm05.management,
SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
2019-07-25 12:39:16,758+02 *ERROR*
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] SpmStopVDSCommand::Not
stopping SPM on vds 'infn-vm05.management', pool id
'18d57688-6ed4-43b8-bd7c-0665b55950b7' as there are uncleared tasks
'Task 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e', status 'running''
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopVDSCommand, log id: 1810fd8b
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
ResetIrsVDSCommand, log id: 2522686f
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopOnIrsVDSCommand, log id: 37bf4639
2019-07-25 12:39:16,760+02 *ERROR*
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] EVENT_ID:
USER_FORCE_SELECTED_SPM_STOP_FAILED(4,096), Failed to force select
infn-vm07.management as the SPM due to a failure to stop the current SPM.
while in the hypervisor (SPM) vdsm.log:
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 *ERROR*
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::logEndTaskFailure: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
with failure:
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,751+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 34ae2b2f
2019-07-25 12:39:18,752+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 34ae2b2f
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::onTaskEndSuccess: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
successfully.
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 42de0c2b
2019-07-25 12:39:18,759+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 42de0c2b
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Cleaning zombie
tasks: Clearing async task 'Unknown' that started at 'Fri May 03
14:48:50 CEST 2019'
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,765+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: da77af2
2019-07-25 12:39:18,766+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: da77af2
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
There seems to be some relation between this error and a task that has
remained hanging. From the SPM server:
# vdsm-client Task getInfo taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
"verb": "prepareMerge",
"id": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e"
}
# vdsm-client Task getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
"message": "running job 1 of 1",
"code": 0,
"taskID": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e",
"taskResult": "",
"taskState": "running"
}
How can I solve this problem?
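One hedged option, if that prepareMerge task really is a leftover and no merge is actually still in progress, is to stop and then clear it on the SPM host with vdsm-client; treat this as a last resort and double-check first, because killing a genuinely running live merge can leave an inconsistent snapshot chain:
```
# On the current SPM host; the task ID is the one from getStatus above.
vdsm-client Task stop taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
vdsm-client Task getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
vdsm-client Task clear taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
```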
Thanks a lot for your help !!
Best Regards
Enrico
--
_______________________________________________________________________
Enrico Becchetti Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY)
Phone:+39 075 5852777 Mail: Enrico.Becchetti<at>pg.infn.it
_______________________________________________________________________
5 months, 1 week
oVirt networks
by Enrico Becchetti
Dear all,
I need your help to understand how to configure the network of a new
oVirt cluster.
My new system will have a 4.3 engine that runs in a virtual machine, and some
Dell R7525 AMD EPYC hypervisors, each holding two 4-port PCI network cards.
These servers will again run the ovirt-node image, version 4.3.
As for the network, there are two HPE Aruba 2540G switches, non-stackable,
with 24 1 Gb/s ports and 2 10 Gb/s uplinks to the star center.
This is a simplified scheme:
My goal is to make the most of the servers' 8 Ethernet interfaces to get
both reliability and the maximum possible throughput.
This cluster will have two virtual networks: one for oVirt management and
one for the traffic of the individual virtual machines.
With that said, here is my idea. I would like to have two aggregated links
of 4 Gb/s each, one for ovirtmgmt and the other for vmnet.
With the oVirt web interface I can create an active-passive "Mode 1"
bond, but this won't allow me to go beyond 1 Gb/s. Alternatively I could
create a "Mode 4" 802.3ad bond, but unfortunately the switches are not
stacked, so this solution does not apply either.
This is an example with an active-passive configuration:
Can you tell me whether oVirt can create nested bonds? Or do you have
other solutions?
Thanks a lot !
Best Regards
Enrico
--
_______________________________________________________________________
Enrico Becchetti Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY)
Phone:+39 075 5852777 Skype:enrico_becchetti
Mail: Enrico.Becchetti<at>pg.infn.it
Pagina web personale: https://www.pg.infn.it/home/enrico-becchetti/
______________________________________________________________________
8 months, 1 week
boot from cdrom & error code 0005
by edp@maddalena.it
Hi.
I have created a new storage domain (data domain, storage type NFS) to use for uploading ISO images.
I then uploaded a new ISO and attached it to a new VM.
But when I try to boot the VM I get this error:
booting from dvd/cd...
boot failed: could not read from cdrom (code 0005)
no bootable device
The ISO file was uploaded successfully to the data storage domain, and the VM lets me attach the ISO in the boot settings.
Can you help me?
Thank you
8 months, 2 weeks
VM Migration Failed
by KSNull Zero
Running oVirt 4.4.5
VM cannot migrate between hosts.
vdsm.log contains the following error:
libvirt.libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tls://ovhost01.local/system: authentication failed: Failed to verify peer's certificate
Certificates on the hosts were renewed some time ago. How can this issue be fixed?
Thank you.
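A couple of hedged checks that usually narrow this down: verify whether the vdsm/libvirt certificates on both hosts are still valid and whether the destination's libvirt TLS endpoint presents a certificate that chains to the oVirt CA; if they are stale, putting the host in Maintenance and using Installation > Enroll Certificate in the Administration Portal normally regenerates them:
```
# On each host: expiry of the vdsm cert and the libvirt client cert
openssl x509 -noout -enddate -in /etc/pki/vdsm/certs/vdsmcert.pem
openssl x509 -noout -enddate -in /etc/pki/libvirt/clientcert.pem

# From the source host: what the destination presents on the libvirt TLS
# port (16514 by default), verified against the oVirt CA
openssl s_client -connect ovhost01.local:16514 \
    -CAfile /etc/pki/CA/cacert.pem </dev/null | openssl x509 -noout -dates
```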
9 months, 4 weeks
How to re-enroll (or renew) host certificates for a single-host hosted-engine deployment?
by Derek Atkins
Hi,
I've got a single-host hosted-engine deployment that I originally
installed with 4.0 and have upgraded over the years to 4.3.10. I and some
of my users have upgraded remote-viewer and now I get an error when I try
to view the console of my VMs:
(remote-viewer:8252): Spice-WARNING **: 11:30:41.806:
../subprojects/spice-common/common/ssl_verify.c:477:openssl_verify: Error
in server certificate verification: CA signature digest algorithm too weak
(num=68:depth0:/O=<My Org Name>/CN=<Host's Name>)
I am 99.99% sure this is because the old certs use SHA1.
I reran engine-setup on the engine and it asked me if I wanted to renew
the PKI, and I answered yes. This replaced many[1] of the certificates in
/etc/pki/ovirt-engine/certs on the engine, but it did not update the
Host's certificate.
All the documentation I've seen says that to refresh this certificate I
need to put the host into maintenance mode and then re-enroll.. However I
cannot do that, because this is a single-host system so I cannot put the
host in local mode -- there is no place to migrate the VMs (let alone the
Engine VM).
So.... Is there a command-line way to re-enroll manually and update the
host certs? Or some other way to get all the leftover certs renewed?
Thanks,
-derek
[1] Not only did it not update the Host's cert, it did not update any of
the vmconsole-proxy certs, nor the certs in /etc/pki/ovirt-vmconsole/, and
obviously nothing in /etc/pki/ on the host itself.
--
Derek Atkins 617-623-3745
derek(a)ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant
10 months, 1 week
4.5.4 with Ceph only storage
by Maurice Burrows
Hey ... A long story short ... I have an existing Red Hat Virt / Gluster hyperconverged solution that I am moving away from.
I have an existing Ceph cluster that I primarily use for OpenStack and a small requirement for S3 via RGW.
I'm planning to build a new oVirt 4.5.4 cluster on RHEL9 using Ceph for all storage requirements. I've read many online articles on oVirt and Ceph, and they all seem to use the Ceph iSCSI gateway, which is now in maintenance, so I'm not real keen to commit to iSCSI.
So my question is: is there any reason I cannot use CephFS for both the hosted engine and as a data storage domain?
I'm currently running Ceph Pacific FWIW.
Cheers
10 months, 1 week
I can't access the console with noVNC or a VNC client (console.vv)
by z84614242@163.com
I installed the oVirt 4.5 engine on CentOS Stream 9 and added an oVirt node (oVirt Node 4.5 ISO) to this engine. I am going to run my VM on this node. I followed the instructions to create the data center, the cluster, and the storage domain, and to upload the image; everything was fine. But after I created a VM with the Ubuntu image attached, I found that I can't access the console. When I use noVNC, it says "Something went wrong, connection is closed"; when I open the VNC console with virt-viewer, it says "Failed to complete handshake: Error in the pull function". I tried changing the console type to Bochs and it behaves the same. When I change to QXL the VM can't start any more; the log says "unsupported configuration: domain configuration does not support video model 'qxl'".
So now I can't access my VM in any way. I deployed the engine following the official instructions and kept mostly default options, so why do I still have this issue? And why does noVNC just say "Something went wrong" instead of telling me what is actually wrong?
11 months, 3 weeks
Changing disk QoS causes segfault with IO-Threads enabled (oVirt 4.3.0.4-1.el7)
by jloh@squiz.net
We recently upgraded to 4.3.0 and have found that changing disk QoS settings on VMs while IO-Threads is enabled causes them to segfault and the VM to reboot. We've been able to replicate this across several VMs. VMs with IO-Threads disabled/turned off do not segfault when changing the QoS.
Mar 1 11:49:06 srvXX kernel: IO iothread1[30468]: segfault at fffffffffffffff8 ip 0000557649f2bd24 sp 00007f80de832f60 error 5 in qemu-kvm[5576498dd000+a03000]
Mar 1 11:49:06 srvXX abrt-hook-ccpp: invalid number 'iothread1'
Mar 1 11:49:11 srvXX libvirtd: 2019-03-01 00:49:11.116+0000: 13365: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Happy to supply some more logs to someone if they'll help but just wondering whether anyone else has experienced this or knows of a current fix other than turning io-threads off.
Cheers.
1 year
oVirt Engine deployment fails behind proxy
by Matteo Bonardi
Hi,
I am trying to deploy the oVirt engine following the self-hosted engine installation procedure in the documentation.
The deployment servers are behind a proxy, which I have set in the environment and in yum.conf before running the deploy.
The deploy fails because the oVirt engine VM cannot resolve the AppStream repository URL:
[ INFO ] TASK [ovirt.engine-setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> ovirt-manager.mydomain]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'AppStream': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=... [Could not resolve host: mirrorlist.centos.org]", "rc": 1, "results": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ INFO ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Clean local storage pools]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Destroy local storage-pool {{ he_local_vm_dir | basename }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Undefine local storage-pool {{ he_local_vm_dir | basename }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20201109165237.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20201109164244-b3e8sd.log
How can I set the proxy for the engine VM?
Ovirt version:
[root@myhost ~]# rpm -qa | grep ovirt-engine-appliance
ovirt-engine-appliance-4.4-20200916125954.1.el8.x86_64
[root@myhost ~]# rpm -qa | grep ovirt-hosted-engine-setup
ovirt-hosted-engine-setup-2.4.6-1.el8.noarch
OS version:
[root@myhost ~]# cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core)
[root@myhost ~]# uname -a
Linux myhost.mydomain 4.18.0-193.28.1.el8_2.x86_64 #1 SMP Thu Oct 22 00:20:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Thanks for the help.
Regards,
Matteo
1 year, 1 month
The oVirt Counter
by Sandro Bonazzola
Hi, for those who remember the Linux Counter project: if you'd like others
to know you're using oVirt, along with some details about your deployment,
here's a way to count yourself in:
https://ovirt.org/community/ovirt-counter.html
Enjoy!
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D PERFORMANCE & SCALE
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
1 year, 2 months
Cannot restart ovirt after massive failure.
by Gilboa Davara
Hello all,
During the night, one of my (smaller) setups, a single node self hosted
engine (localhost NFS) crashed due to what-looks-like a massive disk
failure (Software RAID6, with 10 drives + spare).
After a reboot, I let the RAID resync with a fresh drive) and went on to
start oVirt.
However, no such luck.
Two issues:
1. ovirt-ha-broker fails due to broken hosted engine state (log attached).
2. ovirt-ha-agent fails due to network test (tcp) even though both
remote-host and DNS servers are active. (log attached).
Two questions:
1. Can I somehow force the agent to disable the network liveliness test?
2. Can I somehow force the broker to rebuild / fix the hosted engine state?
- Gilboa
1 year, 3 months
Please, Please Help - New oVirt Install/Deployment Failing - "Host is not up..."
by Matthew J Black
Hi Everyone,
Could someone please help me - I've been trying to do an install of oVirt for *weeks* (including false starts and self-inflicted wounds/errors) and it is still not working.
My setup:
- oVirt v4.5.3
- A brand new fresh vanilla install of RockyLinux 8.6 - all working AOK
- 2*NICs in a bond (802.3ad) with a couple of sub-Interfaces/VLANs - all working AOK
- All relevant IPv4 Address in DNS with Reverse Lookups - all working AOK
- All relevant IPv4 Address in "/etc/hosts" file - all working AOK
- IPv6 (using "method=auto" in the interface config file) enabled on the relevant sub-Interface/VLAN - I'm not using IPv6 on the network, only IPv4, but I'm trying to cover all the bases.
- All relevant Ports (as per the oVirt documentation) set up on the firewall
- ie firewall-cmd --add-service={{ libvirt-tls | ovirt-imageio | ovirt-vmconsole | vdsm }}
- All the relevant Repositories installed (ie RockyLinux BaseOS, AppStream, & PowerTools, and the EPEL, plus the ones from the oVirt documentation)
I have followed the oVirt documentation (including the special RHEL-instructions and RockyLinux-instructions) to the letter - no deviations, no special settings, exactly as they are written.
All the dnf installs, etc, went off without a hitch, including the "dnf install centos-release-ovirt45", "dnf install ovirt-engine-appliance", and "dnf install ovirt-hosted-engine-setup" - no errors anywhere.
Here are the results of a "dnf repolist":
- appstream Rocky Linux 8 - AppStream
- baseos Rocky Linux 8 - BaseOS
- centos-ceph-pacific CentOS-8-stream - Ceph Pacific
- centos-gluster10 CentOS-8-stream - Gluster 10
- centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
- centos-opstools CentOS-OpsTools - collectd
- centos-ovirt45 CentOS Stream 8 - oVirt 4.5
- cs8-extras CentOS Stream 8 - Extras
- cs8-extras-common CentOS Stream 8 - Extras common packages
- epel Extra Packages for Enterprise Linux 8 - x86_64
- epel-modular Extra Packages for Enterprise Linux Modular 8 - x86_64
- ovirt-45-centos-stream-openstack-yoga CentOS Stream 8 - oVirt 4.5 - OpenStack Yoga Repository
- ovirt-45-upstream oVirt upstream for CentOS Stream 8 - oVirt 4.5
- powertools Rocky Linux 8 - PowerTools
So I kicked-off the oVirt deployment with: "hosted-engine --deploy --4 --ansible-extra-vars=he_offline_deployment=true".
I used "--ansible-extra-vars=he_offline_deployment=true" because without that flag I was getting "DNF timout" issues (see my previous post `Local (Deployment) VM Can't Reach "centos-ceph-pacific" Repo`).
I accepted the defaults for all of the questions the script asked, or entered the deployment-relevant answers where appropriate. In doing this I double-checked every answer before hitting <Enter>. Everything progressed smoothly until the deployment reached the "Wait for the host to be up" task... which then hung for more than 30 minutes before failing.
From the ovirt-hosted-engine-setup... log file:
- 2022-10-20 17:54:26,285+1100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not up, please check logs, perhaps also on the engine machine"}
I checked the following log files and found all of the relevant ERROR lines, then checked several tens of preceding and succeeding lines trying to determine what was going wrong, but I could not determine anything.
- ovirt-hosted-engine-setup...
- ovirt-hosted-engine-setup-ansible-bootstrap_local_vm...
- ovirt-hosted-engine-setup-ansible-final_clean... - not really relevant, I believe
I can include the log files (or the relevant parts of the log files) if people want, but they are very large: several hundred kilobytes each.
I also googled "oVirt Host is not up" and found several entries, but after reading them all, the most relevant seems to be a thread from this mailing list: `Install of RHV 4.4 failing - "Host is not up, please check logs, perhaps also on the engine machine"` - but this seems to be talking about an upgrade and I didn't glean anything useful from it - I could, of course, be wrong about that.
So my questions are:
- Where else should I be looking (ie other log files, etc, and possible where to find them)?
- Does anyone have any idea why this isn't working?
- Does anyone have a work-around (including a completely manual process to get things working - I don't mind working in the CLI with virsh, etc)?
- What am I doing wrong?
Please, I'm really stumped with this, and I really do need help.
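On the log question specifically, a hedged list of the places that usually explain a "Host is not up" during a hosted-engine deploy (the local engine VM name below is a placeholder for whatever temporary /etc/hosts entry the installer created):
```
# On the host being deployed:
journalctl -u vdsmd -u supervdsmd -u ovirt-ha-agent --since "2 hours ago"
less /var/log/vdsm/vdsm.log

# On the temporary (local) engine VM, while the deployment is still up:
ssh root@<local-engine-vm> 'less /var/log/ovirt-engine/host-deploy/*.log'
ssh root@<local-engine-vm> 'less /var/log/ovirt-engine/engine.log'
```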
Cheers
Dulux-Oz
1 year, 3 months
How to renew an expired oVirt node vdsm cert manually?
by dhanaraj.ramesh@yahoo.com
Below are the steps to renew the expired vdsm cert on an oVirt node.
# To check whether the cert has expired:
# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates
1. Backup vdsm folder
# cd /etc/pki
# mv vdsm vdsm.orig
# mkdir vdsm ; chown vdsm:kvm vdsm
# cd vdsm
# mkdir libvirt-vnc certs keys libvirt-spice libvirt-migrate
# chown vdsm:kvm libvirt-vnc certs keys libvirt-spice libvirt-migrate
2. Regenerate cert & keys
# vdsm-tool configure --module certificates
3. Copy the cert to destination location
chmod 440 /etc/pki/vdsm/keys/vdsmkey.pem
chown root /etc/pki/vdsm/certs/*pem
chmod 644 /etc/pki/vdsm/certs/*pem
cp /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/libvirt-spice/ca-cert.pem
cp /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/vdsm/libvirt-spice/server-key.pem
cp /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/vdsm/libvirt-spice/server-cert.pem
cp /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
cp /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/vdsm/libvirt-vnc/server-key.pem
cp /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/vdsm/libvirt-vnc/server-cert.pem
cp -p /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
cp -p /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/vdsm/libvirt-migrate/server-key.pem
cp -p /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/vdsm/libvirt-migrate/server-cert.pem
chown root:qemu /etc/pki/vdsm/libvirt-migrate/server-key.pem
cp -p /etc/pki/vdsm.orig/keys/libvirt_password /etc/pki/vdsm/keys/
mv /etc/pki/libvirt/clientcert.pem /etc/pki/libvirt/clientcert.pem.orig
mv /etc/pki/libvirt/private/clientkey.pem /etc/pki/libvirt/private/clientkey.pem.orig
mv /etc/pki/CA/cacert.pem /etc/pki/CA/cacert.pem.orig
cp -p /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/libvirt/clientcert.pem
cp -p /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/libvirt/private/clientkey.pem
cp -p /etc/pki/vdsm/certs/cacert.pem /etc/pki/CA/cacert.pem
4. Cross-check the backup folder /etc/pki/vdsm.orig vs /etc/pki/vdsm
# refer to /etc/pki/vdsm.orig/*/ and set the correct owner & group permission in /etc/pki/vdsm/*/
5. Restart services # Make sure both services are up
systemctl restart vdsmd libvirtd
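A quick, hedged check after the restart, to confirm the services are actually serving the renewed certificate (54321 is vdsm's default port):
```
# Compare the on-disk cert dates with what vdsm serves on the wire
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates
openssl s_client -connect localhost:54321 </dev/null 2>/dev/null \
    | openssl x509 -noout -dates
```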
1 year, 3 months
Unable to install oVirt on RHEL7.5
by SS00514758@techmahindra.com
Hi All,
I am unable to install oVirt on RHEL 7.5. To install it I am following the link below:
https://www.ovirt.org/documentation/install-guide/chap-Installing_oVirt.html
However, it is not working for me: a couple of dependencies are not getting installed, and because of this I am not able to run ovirt-engine. Below are the dependency packages that fail to install:
Error: Package: collectd-write_http-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
Requires: collectd(x86-64) = 5.8.0-6.1.el7
Removing: collectd-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-6.1.el7
Updated By: collectd-5.8.1-1.el7.x86_64 (epel)
collectd(x86-64) = 5.8.1-1.el7
Available: collectd-5.7.2-1.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-1.el7
Available: collectd-5.7.2-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-3.el7
Available: collectd-5.8.0-2.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-2.el7
Available: collectd-5.8.0-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-3.el7
Available: collectd-5.8.0-5.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-5.el7
Please help me install this.
I'm looking forward to resolving this issue.
Regards
Sumit Sahay
1 year, 4 months
Grafana - Origin Not Allowed
by Maton, Brett
oVirt 4.5.0.8-1.el8
I tried to connect to Grafana via the monitoring portal link from the dashboard,
and all panels fail to display any data, with varying error messages
that all include 'Origin Not Allowed'.
I navigated to Data Sources and ran a test on the PostgreSQL connection
(localhost) which threw the same Origin Not Allowed error message.
Any suggestions?
1 year, 4 months
Multiple hosts stuck in Connecting state waiting for storage pool to go up.
by ivan.lezhnjov.iv@gmail.com
Hi!
We have a problem with multiple hosts stuck in Connecting state, which I hoped somebody here could help us wrap our heads around.
All hosts, except one, seem to have very similar symptoms but I'll focus on one host that represents the rest.
So, the host is stuck in the Connecting state and this is what we see in the oVirt log files.
/var/log/ovirt-engine/engine.log:
2023-04-20 09:51:53,021+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-37) [] Command 'GetCapabilitiesAsyncVDSCommand(HostName = ABC010-176-XYZ, VdsIdAndVdsVDSCommandParametersBase:{hostId='2c458562-3d4d-4408-afc9-9a9484984a91', vds='Host[ABC010-176-XYZ,2c458562-3d4d-4408-afc9-9a9484984a91]'})' execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: SSL session is invalid
2023-04-20 09:55:16,556+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-67) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ABC010-176-XYZ command Get Host Capabilities failed: Message timeout which can be caused by communication issues
/var/log/vdsm/vdsm.log:
2023-04-20 17:48:51,977+0300 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList() from=internal, task_id=ebce7c8c-6ded-454e-9aee-86edf72764ef (api:31)
2023-04-20 17:48:51,977+0300 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=ebce7c8c-6ded-454e-9aee-86edf72764ef (api:37)
2023-04-20 17:48:51,978+0300 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:723)
Both engine.log and vdsm.log are flooded with these messages. They are repeated at regular intervals ad infinitum. This is one common symptom shared by multiple hosts in our deployment: they all have these message loops in their engine.log and vdsm.log files.
Running vdsm-client Host getConnectedStoragePools also returns an empty list, represented by [], on all hosts (but interestingly there is one that showed a Storage Pool UUID and yet it was still stuck in the Connecting state).
This particular host (ABC010-176-XYZ) is connected to 3 CEPH iSCSI Storage Domains and lsblk shows 3 block devices with matching UUIDs in their device components. So, the storage seems to be connected but the Storage Pool is not? How is that even possible?
Now, what's even more weird is that we tried rebooting the host (via Administrator Portal) and it didn't help. We even tried removing and re-adding the host in Administrator Portal but to no avail.
Additionally, the host refused to go into Maintenance mode so we had to enforce it by manually updating Engine DB.
We also tried reinstalling the host via Administrator Portal and ran into another weird problem, which I'm not sure if it's a related one or a problem that deserves a dedicated discussion thread but, basically, the underlying Ansible playbook exited with the following error message:
"stdout" : "fatal: [10.10.10.176]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Data could not be sent to remote host \\\"10.10.10.176\\\". Make sure this host can be reached over ssh: \", \"unreachable\": true}",
Counterintuitively, just before running Reinstall via Administrator Portal we had been able to reboot the same host (which as you know oVirt does via Ansible as well). So, no changes on the host in between just different Ansible playbooks. To confirm that we actually had access to the host over ssh we successfully ran ssh -p $PORT root(a)10.10.10.176 -i /etc/pki/ovirt-engine/keys/engine_id_rsa and it worked.
That made us scratch our heads for a while but what seems to had fixed Ansible's ssh access problems was manual full stop of all VDSM-related systemd services on the host. It was just a wild guess but as soon as we stopped all VDSM services Ansible stopped complaining about not being able to reach the target host and successfully did its job.
I'm sure you'd like to see more logs but I'm not certain what exactly is relevant. There are a ton of logs as this deployment is comprised of nearly 80 hosts. So, I guess it's best if you just request to see specific logs, messages or configuration details and I'll cherry-pick what's relevant.
We don't really understand what's going on and would appreciate any help. We tried just about anything we could think of to resolve this issue and are running out of ideas what to do next.
If you have any questions just ask and I'll do my best to answer them.
1 year, 6 months
Re: Failed to synchronize networks of Provider ovirt-provider-ovn
by Mail SET Inc. Group
Yes, I used the same manual to change the WebUI SSL.
ovirt-ca-file= points to the same SSL file that the WebUI uses.
Yes, I restarted ovirt-provider-ovn, I restarted the engine, I restarted everything I could restart. Nothing...
> 12 сент. 2018 г., в 16:11, Dominik Holler <dholler(a)redhat.com> написал(а):
>
> On Wed, 12 Sep 2018 14:23:54 +0300
> "Mail SET Inc. Group" <mail(a)set-pro.net> wrote:
>
>> Ok!
>
> Not exactly, please use users(a)ovirt.org for such questions.
> Other should benefit from this questions, too.
> Please write the next mail to users(a)ovirt.org and keep me in CC.
>
>> What i did:
>>
>> 1) install oVirt «from box» (4.2.5.2-1.el7);
>> 2) generate own ssl for my engine using my FreeIPA CA, Install it and
>
> What means "Install it"? You can use the doc from the following link
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/...
>
> Ensure that ovirt-ca-file= in
> /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
> points to the correct file and ovirt-provider-ovn is restarted.
>
>> get tis issue;
>>
>>
>> [root@engine ~]# tail -n 50 /var/log/ovirt-provider-ovn.log
>> 2018-09-12 14:10:23,828 root [SSL: CERTIFICATE_VERIFY_FAILED]
>> certificate verify failed (_ssl.c:579) Traceback (most recent call
>> last): File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py",
>> line 133, in _handle_request method, path_parts, content
>> File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py",
>> line 175, in handle_request return
>> self.call_response_handler(handler, content, parameters) File
>> "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in
>> call_response_handler return response_handler(content, parameters)
>> File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py",
>> line 62, in post_tokens user_password=user_password) File
>> "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in
>> create_token return auth.core.plugin.create_token(user_at_domain,
>> user_password) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line
>> 48, in create_token timeout=self._timeout()) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 75,
>> in create_token username, password, engine_url, ca_file, timeout)
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
>> 91, in _get_sso_token timeout=timeout File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 54,
>> in wrapper response = func(*args, **kwargs) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 47,
>> in wrapper raise BadGateway(e) BadGateway: [SSL:
>> CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
>>
>>
>> [root@engine ~]# tail -n 20 /var/log/ovirt-engine/engine.log
>> 2018-09-12 14:10:23,773+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
>> Acquired to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:10:23,778+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
>> Running command: SyncNetworkProviderCommand internal: true.
>> 2018-09-12 14:10:23,836+03 ERROR
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
>> Command
>> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:10:23,837+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
>> freed to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:14:12,477+03 INFO
>> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default
>> task-6) [] User admin@internal successfully logged in with scopes:
>> ovirt-app-admin ovirt-app-api ovirt-app-portal
>> ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all
>> ovirt-ext=token-info:authz-search
>> ovirt-ext=token-info:public-authz-search
>> ovirt-ext=token-info:validate ovirt-ext=token:password-access
>> 2018-09-12 14:14:12,587+03 INFO
>> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default
>> task-6) [1bf1b763] Running command: CreateUserSessionCommand
>> internal: false. 2018-09-12 14:14:12,628+03 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-6) [1bf1b763] EVENT_ID: USER_VDC_LOGIN(30), User
>> admin@internal-authz connecting from '10.0.3.61' using session
>> 's8jAm7BUJGlicthm6yZBA3CUM8QpRdtwFaK3M/IppfhB3fHFB9gmNf0cAlbl1xIhcJ2WX+ww7e71Ri+MxJSsIg=='
>> logged in. 2018-09-12 14:14:30,972+03 INFO
>> [org.ovirt.engine.core.bll.provider.ImportProviderCertificateCommand]
>> (default task-6) [ee3cc8a7-4485-4fdf-a0c2-e9d67b5cfcd3] Running
>> command: ImportProviderCertificateCommand internal: false. Entities
>> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
>> SystemAction group CREATE_STORAGE_POOL with role type ADMIN
>> 2018-09-12 14:14:30,982+03 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-6) [ee3cc8a7-4485-4fdf-a0c2-e9d67b5cfcd3] EVENT_ID:
>> PROVIDER_CERTIFICATE_IMPORTED(213), Certificate for provider
>> ovirt-provider-ovn was imported. (User: admin@internal-authz)
>> 2018-09-12 14:14:31,006+03 INFO
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (default task-6) [a48d94ab-b0b2-42a2-a667-0525b4c652ea] Running
>> command: TestProviderConnectivityCommand internal: false. Entities
>> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
>> SystemAction group CREATE_STORAGE_POOL with role type ADMIN
>> 2018-09-12 14:14:31,058+03 ERROR
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (default task-6) [a48d94ab-b0b2-42a2-a667-0525b4c652ea] Command
>> 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'default' is using 0 threads out of 1, 5 threads waiting for
>> tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engine' is using 0 threads out of 500, 16 threads waiting for
>> tasks and 0 tasks in queue. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engineScheduled' is using 0 threads out of 100, 100 threads
>> waiting for tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads
>> waiting for tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'hostUpdatesChecker' is using 0 threads out of 5, 2 threads
>> waiting for tasks. 2018-09-12 14:15:23,843+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f] Lock
>> Acquired to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:15:23,849+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f]
>> Running command: SyncNetworkProviderCommand internal: true.
>> 2018-09-12 14:15:23,900+03 ERROR
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f]
>> Command
>> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:15:23,901+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f] Lock
>> freed to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}'
>>
>>
>> [root@engine ~]#
>> cat /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf #
>> This file is automatically generated by engine-setup. Please do not
>> edit manually [OVN REMOTE] ovn-remote=ssl:127.0.0.1:6641
>> [SSL]
>> https-enabled=true
>> ssl-cacert-file=/etc/pki/ovirt-engine/ca.pem
>> ssl-cert-file=/etc/pki/ovirt-engine/certs/ovirt-provider-ovn.cer
>> ssl-key-file=/etc/pki/ovirt-engine/keys/ovirt-provider-ovn.key.nopass
>> [OVIRT]
>> ovirt-sso-client-secret=Ms7Gw9qNT6IkXu7oA54tDmxaZDIukABV
>> ovirt-host=https://engine.set.local:443
>> ovirt-sso-client-id=ovirt-provider-ovn
>> ovirt-ca-file=/etc/pki/ovirt-engine/apache-ca.pem
>> [PROVIDER]
>> provider-host=engine.set.local
>>
>>
>>> 12 сент. 2018 г., в 13:59, Dominik Holler <dholler(a)redhat.com>
>>> написал(а):
>>>
>>> On Wed, 12 Sep 2018 13:04:53 +0300
>>> "Mail SET Inc. Group" <mail(a)set-pro.net> wrote:
>>>
>>>> Hello Dominik!
>>>> I have a same issue with OVN provider and SSL
>>>> https://www.mail-archive.com/users@ovirt.org/msg47020.html
>>>> <https://www.mail-archive.com/users@ovirt.org/msg47020.html> But
>>>> certificate changes not helps to resolve it. Maybe you can help me
>>>> with this?
>>>
>>> Sure. Can you please share the relevant lines of
>>> ovirt-provider-ovn.log and engine.log, and the information if you
>>> are using the certificates generated by engine-setup with
>>> users(a)ovirt.org ? Thanks,
>>> Dominik
>>>
>>
>
>
1 year, 6 months
Unable to enable the HPET component of a specific VM in oVirt 4.7
by ricardoot@gmail.com
Hello community members,
I'm currently using oVirt 4.7 as my virtualization environment, and I'm facing an issue with enabling the HPET (High Precision Event Timer) component in the XML configuration file of virtual machine (VM).
Upon inspecting the XML file, I noticed that the `<timer name='hpet' present='no'/>` line is missing, indicating that the HPET component is disabled.
Here are the steps I have taken so far:
1. I verified that the VM's XML configuration file does not include the `<timer name='hpet' present='yes'/>` line.
2. While the VM was powered on, I used the following command to edit the XML configuration file:
```
virsh edit VM_NAME
```
I added the `<timer name='hpet' present='yes'/>` line to the XML file. However, the changes did not persist after restarting the VM.
To provide additional information, on the host where oVirt is running, the available clock sources can be viewed by executing the following command:
```
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
```
The output shows the available clock sources, such as `tsc`, `hpet`, and `acpi_pm`.
To resolve the authentication issue with the `virsh` command, I created a user with appropriate privileges using the following command:
```
sudo saslpasswd2 -a libvirt USERNAME
```
After creating the user, I was able to authenticate successfully with the `virsh` command using the newly created credentials.
However, I'm unable to find an option to add the HPET parameter in the web console of oVirt. It seems that the option to configure HPET is not available in the web console.
Has anyone else encountered a similar issue in oVirt 4.7? Could you please provide guidance or suggest a solution to enable the HPET component in the XML configuration file of a powered-off VM in oVirt 4.7? Any insights, experiences, or suggestions would be greatly appreciated.
Thank you in advance for your assistance!
Best regards,
1 year, 6 months
engine-config -s UserSessionTimeOutInterval=X problem
by marek
ovirt 4.5.4, standalone engine, centos 8 stream
[root@ovirt ~]# engine-config -g UserSessionTimeOutInterval
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
UserSessionTimeOutInterval: 30 version: general
[root@ovirt ~]# engine-config -s UserSessionTimeOutInterval=60
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Cannot set value 60 to key UserSessionTimeOutInterval.
Any ideas where the problem is?
Marek
1 year, 8 months
Unable to upload or download iso via admin portal
by Igor Filipovic
Hi, I'm having trouble on a fresh 4.4.10.7 installation (on Oracle Linux): I'm not able to upload or download any file using the storage domain's upload image function. I've imported the CA certificate and have tried several browsers (Firefox, Chrome, Edge) on different computers (and the browsers are green, claiming that I'm securely connected), but I always get an error regarding the CA certificate when I test the connection or when I try to upload an ISO image. I've tried uploading an ISO image via CLI commands (upload_disk.py), and that scenario was successful; however, this method is not very convenient for my co-workers.
I have 5 physical hosts; one is dedicated to running ovirt-engine, and the other 4 are KVM hypervisors. When I try to upload an ISO, this is what engine.log logs:
2023-04-08 11:00:28,339+02 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-2) [f6b62add-0a0c-45ee-a985-a76171843382] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: 1eb97088-b805-4616-af55-0ac9d1d7dfbe Type: SystemAction group CREATE_DISK with role type USER
2023-04-08 11:00:28,340+02 INFO [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (default task-2) [f6b62add-0a0c-45ee-a985-a76171843382] Updating image transfer a78b18c5-e395-4c29-aa5c-15ffff8a1cb6 (image 4f758325-ac11-4071-a9fa-d180425e8604) phase to Paused by System (message: 'Sent 0MB')
2023-04-08 11:00:28,363+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-2) [f6b62add-0a0c-45ee-a985-a76171843382] EVENT_ID: UPLOAD_IMAGE_NETWORK_ERROR(1,062), Unable to upload image to disk 4f758325-ac11-4071-a9fa-d180425e8604 due to a network error. Ensure ovirt-engine's CA certificate is registered as a trusted CA in the browser. The certificate can be fetched from https://engine-dr.somedomain/ovirt-engine/services/pki-resource?resource=...
2023-04-08 11:00:28,363+02 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-2) [f6b62add-0a0c-45ee-a985-a76171843382] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: 1eb97088-b805-4616-af55-0ac9d1d7dfbe Type: SystemAction group CREATE_DISK with role type USER
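For what it's worth, this is how I have been checking the TLS side from a shell on the engine machine (the hostname below is my engine's FQDN; 54323 is, as far as I know, the default ovirt-imageio port on the engine in 4.4, so treat both as assumptions to adjust):
```
curl -k -o engine-ca.pem 'https://engine-dr.somedomain/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'
openssl s_client -connect engine-dr.somedomain:54323 -CAfile engine-ca.pem </dev/null
```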
Can you please point me in some direction to try to fix this?
Thanks, and best regards
Igor
1 year, 8 months
How to restore oVirt nodes to UP from NonResponsive while VMs keep running
by José Pascual
Hello,
I have an oVirt setup with two nodes that are NonResponsive; all the VMs are still running properly, but I can't manage them because they are in Unknown state.
It seems the nodes lost connection with their gateway for a while.
I have thought of first restarting the node where the engine is not running and trying to bring it back UP, and then restarting the engine from within its VM to see if it starts up on this node.
What is the proper way of restoring management?
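Assuming this is a self-hosted engine deployment (please correct me if that changes the answer), these are the checks I would run on each node before restarting anything:
```
hosted-engine --vm-status
systemctl status vdsmd ovirt-ha-agent ovirt-ha-broker
```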
Thanks,
Best Regards
--
Regards,
José Pascual Gallud Martínez
1 year, 9 months
Installing oVirt as a self-hosted engine - big big problem :()
by Jorge Visentini
Hi guys, I'm starting the weekend with quite a "cucumber" (a real headache) on my hands.
I've been racking my brains for about 4 days to deploy a new engine.
It turns out I have already tested *4.4.10*, *4.5.4.x*, and *4.5.5 (master)* (el8
and el9), and none of them works.
It seems to me to be ansible or a python problem, but I'm not sure.
I've read several oVirt Reddit and GitHub threads, but the workarounds there no
longer seem to have any effect. I believe it's some package in the CentOS Stream
repositories, *but unfortunately I don't have those repositories frozen locally here*.
Deploy hangs at *[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for
the host to be up]*
I already tried to update the version of *python netaddr* as I read in git
and it still didn't work
I also tried to *freeze the ansible update* in the engine and it didn't
work.
I updated the version of *ovirt-ansible-collection to
ovirt-ansible-collection-3.1.3-0.1.master.20230420113738.el8.noarch.rpm*
and it didn't work either...
*The error seems to be on all oVirt builds*, but I don't know what I'm
doing wrong anymore because I can't pinpoint where the error is.
I appreciate any tips
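While it sits on that task, these are the logs I keep watching, on the host and on the temporary engine VM (standard paths; root SSH to the local engine VM assumes the appliance password set during deploy):
```
tail -f /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log
ssh root@ksmengine01.kosmo.cloud 'ls -t /var/log/ovirt-engine/host-deploy/ | head -1'
ssh root@ksmengine01.kosmo.cloud 'tail -n 100 /var/log/ovirt-engine/engine.log'
```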
*Below are some log outputs:*
[root@ksmmi1r02ovirt36 ~]# tail -f /var/log/vdsm/vdsm.log
2023-07-07 22:23:30,144-0300 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=6df1f5ed-0f41-4001-bb2e-e50fb0214ac7 (api:37)
2023-07-07 22:23:30,144-0300 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:35,146-0300 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=bd2a755d-3488-4b43-8ca4-44717dd6b017 (api:31)
2023-07-07 22:23:35,146-0300 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=bd2a755d-3488-4b43-8ca4-44717dd6b017 (api:37)
2023-07-07 22:23:35,146-0300 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:39,320-0300 INFO (periodic/3) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=68567ce3-b579-469d-a46d-7bafc7b3e6bd (api:31)
2023-07-07 22:23:39,320-0300 INFO (periodic/3) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=68567ce3-b579-469d-a46d-7bafc7b3e6bd
(api:37)
2023-07-07 22:23:40,151-0300 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=fadcf734-9f7e-4681-8764-9d3863718644 (api:31)
2023-07-07 22:23:40,151-0300 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=fadcf734-9f7e-4681-8764-9d3863718644 (api:37)
2023-07-07 22:23:40,151-0300 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:44,183-0300 INFO (jsonrpc/1) [api.host] START
getAllVmStats() from=::1,49920 (api:31)
2023-07-07 22:23:44,184-0300 INFO (jsonrpc/1) [api.host] FINISH
getAllVmStats return={'status': {'code': 0, 'message': 'Done'},
'statsList': (suppressed)} from=::1,49920 (api:37)
2023-07-07 22:23:45,157-0300 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=504d8028-35be-45a3-b24d-4ec7cbc82f7e (api:31)
2023-07-07 22:23:45,157-0300 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=504d8028-35be-45a3-b24d-4ec7cbc82f7e (api:37)
2023-07-07 22:23:45,157-0300 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:50,162-0300 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=297ad1df-c855-4fbb-a89f-dfbe7a1b60a2 (api:31)
2023-07-07 22:23:50,162-0300 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=297ad1df-c855-4fbb-a89f-dfbe7a1b60a2 (api:37)
2023-07-07 22:23:50,162-0300 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
[root@ksmmi1r02ovirt36 ~]# journalctl -f
-- Logs begin at Fri 2023-07-07 21:57:13 -03. --
Jul 07 22:24:46 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[13790]: 13791 still running (86045)
Jul 07 22:24:50 ksmmi1r02ovirt36.kosmo.cloud platform-python[22812]:
ansible-ovirt_host_info Invoked with
pattern=name=ksmmi1r02ovirt36.kosmo.cloud auth={'token':
'eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIyZzVfWUdWX08wSFJoWnlVeFNkdGl4d0liRWV6Wkp5NTgwN3BaXzUxelBvIn0.eyJleHAiOjE2ODg3OTY0MzAsImlhdCI6MTY4ODc3OTE1MCwianRpIjoiOTBlOWE5ZTAtMmExNS00MzNiLWIxOGQtMmUwNmI4MTQ5NGE2IiwiaXNzIjoiaHR0cHM6Ly9rc21lbmdpbmUwMS5rb3Ntby5jbG91ZC9vdmlydC1lbmdpbmUtYXV0aC9yZWFsbXMvb3ZpcnQtaW50ZXJuYWwiLCJhdWQiOiJhY2NvdW50Iiwic3ViIjoiYWRkMWMyYzYtYzJjMy00N2M4LWI1ODUtNGI2MTU2ZDAxYTE3IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoib3ZpcnQtZW5naW5lLWludGVybmFsIiwic2Vzc2lvbl9zdGF0ZSI6IjNhMWUzZTU2LWIyZTUtNGMyYi05YTIxLThjZjE1YzY4NzlmNiIsImFjciI6IjEiLCJhbGxvd2VkLW9yaWdpbnMiOlsiaHR0cHM6Ly9rc21lbmdpbmUwMS5rb3Ntby5jbG91ZCJdLCJyZWFsbV9hY2Nlc3MiOnsicm9sZXMiOlsiZGVmYXVsdC1yb2xlcy1vdmlydC1pbnRlcm5hbCIsIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6Im92aXJ0LWV4dD10b2tlbjpwYXNzd29yZC1hY2Nlc3Mgb3ZpcnQtZXh0PXRva2VuLWluZm86cHVibGljLWF1dGh6LXNlYXJjaCBvdmlydC1hcHAtYXBpIG92aXJ0LWV4dD10b2tlbi1pbmZvOnZhbGlkYXRlIHByb2ZpbGUgZW1haWwgb3ZpcnQtZXh0PXRva2VuLWluZm86YXV0aHotc2VhcmNoIiwic2lkIjoiM2ExZTNlNTYtYjJlNS00YzJiLTlhMjEtOGNmMTVjNjg3OWY2IiwiZW1haWxfdmVyaWZpZWQiOmZhbHNlLCJncm91cHMiOlsiL292aXJ0LWFkbWluaXN0cmF0b3IiXSwicHJlZmVycmVkX3VzZXJuYW1lIjoiYWRtaW5Ab3ZpcnQiLCJlbWFpbCI6ImFkbWluQGxvY2FsaG9zdCJ9.o9PsulNw0urPphWITcB6Y3wpHQiiQ0v00su6XorITcvNElzkfHqyYfJd8W-kIfgElh6BNnCmYyIwtX7t3T4-PiLgDdipH1J9uzuDBXkmNBNcVmFimfUAqyC8aUITK56CqZ5TyRyHqhOicPciqGSY8R98hQ8I8y11w2RiIFT0rQYnRev75gjKoqUH29uNyeCAdTyKvPSGHNm1pLLrtPUmk-JCGmsYytNRCMHAPoNIlZP3k94PbQ9pI4jZ5O7kcRSgJik8tUDOVglcL4g0MoAJwracek2MUTvK8pDpRghI9hSQVLFtAXCyGRxfHHzTko4EbHBbFlz5s3pfs2kbF6TFmw',
'url': 'https://ksmengine01.kosmo.cloud/ovirt-engine/api', 'ca_file': None,
'insecure': True, 'timeout': 0, 'compress': True, 'kerberos': False,
'headers': None, 'hostname': None, 'username': None, 'password': None}
fetch_nested=False nested_attributes=[] follow=[] all_content=False
cluster_version=None
Jul 07 22:24:51 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[13790]: 13791 still running (86040)
Jul 07 22:24:56 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[13790]: 13791 still running (86035)
Jul 07 22:25:00 ksmmi1r02ovirt36.kosmo.cloud platform-python[22829]:
ansible-ovirt_host_info Invoked with
pattern=name=ksmmi1r02ovirt36.kosmo.cloud auth={'token':
'eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIyZzVfWUdWX08wSFJoWnlVeFNkdGl4d0liRWV6Wkp5NTgwN3BaXzUxelBvIn0.eyJleHAiOjE2ODg3OTY0MzAsImlhdCI6MTY4ODc3OTE1MCwianRpIjoiOTBlOWE5ZTAtMmExNS00MzNiLWIxOGQtMmUwNmI4MTQ5NGE2IiwiaXNzIjoiaHR0cHM6Ly9rc21lbmdpbmUwMS5rb3Ntby5jbG91ZC9vdmlydC1lbmdpbmUtYXV0aC9yZWFsbXMvb3ZpcnQtaW50ZXJuYWwiLCJhdWQiOiJhY2NvdW50Iiwic3ViIjoiYWRkMWMyYzYtYzJjMy00N2M4LWI1ODUtNGI2MTU2ZDAxYTE3IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoib3ZpcnQtZW5naW5lLWludGVybmFsIiwic2Vzc2lvbl9zdGF0ZSI6IjNhMWUzZTU2LWIyZTUtNGMyYi05YTIxLThjZjE1YzY4NzlmNiIsImFjciI6IjEiLCJhbGxvd2VkLW9yaWdpbnMiOlsiaHR0cHM6Ly9rc21lbmdpbmUwMS5rb3Ntby5jbG91ZCJdLCJyZWFsbV9hY2Nlc3MiOnsicm9sZXMiOlsiZGVmYXVsdC1yb2xlcy1vdmlydC1pbnRlcm5hbCIsIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6Im92aXJ0LWV4dD10b2tlbjpwYXNzd29yZC1hY2Nlc3Mgb3ZpcnQtZXh0PXRva2VuLWluZm86cHVibGljLWF1dGh6LXNlYXJjaCBvdmlydC1hcHAtYXBpIG92aXJ0LWV4dD10b2tlbi1pbmZvOnZhbGlkYXRlIHByb2ZpbGUgZW1haWwgb3ZpcnQtZXh0PXRva2VuLWluZm86YXV0aHotc2VhcmNoIiwic2lkIjoiM2ExZTNlNTYtYjJlNS00YzJiLTlhMjEtOGNmMTVjNjg3OWY2IiwiZW1haWxfdmVyaWZpZWQiOmZhbHNlLCJncm91cHMiOlsiL292aXJ0LWFkbWluaXN0cmF0b3IiXSwicHJlZmVycmVkX3VzZXJuYW1lIjoiYWRtaW5Ab3ZpcnQiLCJlbWFpbCI6ImFkbWluQGxvY2FsaG9zdCJ9.o9PsulNw0urPphWITcB6Y3wpHQiiQ0v00su6XorITcvNElzkfHqyYfJd8W-kIfgElh6BNnCmYyIwtX7t3T4-PiLgDdipH1J9uzuDBXkmNBNcVmFimfUAqyC8aUITK56CqZ5TyRyHqhOicPciqGSY8R98hQ8I8y11w2RiIFT0rQYnRev75gjKoqUH29uNyeCAdTyKvPSGHNm1pLLrtPUmk-JCGmsYytNRCMHAPoNIlZP3k94PbQ9pI4jZ5O7kcRSgJik8tUDOVglcL4g0MoAJwracek2MUTvK8pDpRghI9hSQVLFtAXCyGRxfHHzTko4EbHBbFlz5s3pfs2kbF6TFmw',
'url': 'https://ksmengine01.kosmo.cloud/ovirt-engine/api', 'ca_file': None,
'insecure': True, 'timeout': 0, 'compress': True, 'kerberos': False,
'headers': None, 'hostname': None, 'username': None, 'password': None}
fetch_nested=False nested_attributes=[] follow=[] all_content=False
cluster_version=None
cat
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20230707220613-2t8ze9.log
2023-07-07 22:19:11,816-0300 INFO
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup :
Wait for the host to be up]
2023-07-07 22:39:39,882-0300 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 {'changed': False, 'ovirt_hosts':
[{'href': '/ovirt-engine/api/hosts/d1bf8fb2-74f4-4954-8c34-66eb99ba2bf3',
'comment': '', 'id': 'd1bf8fb2-74f4-4954-8c34-66eb99ba2bf3', 'name':
'ksmmi1r02ovirt36.kosmo.cloud', 'address': 'ksmmi1r02ovirt36.kosmo.cloud',
'affinity_labels': [], 'auto_numa_status': 'unknown', 'certificate':
{'organization': 'kosmo.cloud', 'subject':
'O=kosmo.cloud,CN=ksmmi1r02ovirt36.kosmo.cloud'}, 'cluster': {'href':
'/ovirt-engine/api/clusters/d8784faf-8b77-45c8-9fa4-b9b4b0404d95', 'id':
'd8784faf-8b77-45c8-9fa4-b9b4b0404d95'}, 'cpu': {'speed': 0.0, 'topology':
{}}, 'cpu_units': [], 'device_passthrough': {'enabled': False}, 'devices':
[], 'external_network_provider_configurations': [], 'external_status':
'ok', 'hardware_information': {'supported_rng_sources': []}, 'hooks': [],
'katello_errata': [], 'kdump_status': 'unknown', 'ksm': {'enabled': False},
'max_scheduling_memory': 0, 'memory': 0, 'network_attachments': [], 'nics':
[], 'numa_nodes': [], 'numa_supported': False, 'os':
{'custom_kernel_cmdline': ''}, 'ovn_configured': False, 'permissions': [],
'port': 54321, 'power_management': {'automatic_pm_enabled': True,
'enabled': False, 'kdump_detection': True, 'pm_proxies': []}, 'protocol':
'stomp', 'reinstallation_required': False, 'se_**FILTERED**': {}, 'spm':
{'priority': 5, 'status': 'none'}, 'ssh': {'fingerprint':
'SHA256:Nr04m1g0UxbpqxwMBr93DLHz2m2wzR8+xFJhBVNovHY', 'port': 22,
'public_key': 'ecdsa-sha2-nistp256
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE2EdJn0vJiJUagEK3w2G2nHmziJJasailwapaL06qWU2+BkPwkokSvyK07APhwyynnz6lw8J4y/kWv12D7/r+s='},
'statistics': [], 'status': 'install_failed',
'storage_connection_extensions': [], 'summary': {'total': 0}, 'tags': [],
'transparent_huge_pages': {'enabled': False}, 'type': 'rhel',
'unmanaged_networks': [], 'update_available': False, 'vgpu_placement':
'consolidated'}], 'invocation': {'module_args': {'pattern':
'name=ksmmi1r02ovirt36.kosmo.cloud', 'fetch_nested': False,
'nested_attributes': [], 'follow': [], 'all_content': False,
'cluster_version': None}}, '_ansible_no_log': None, 'attempts': 120}
2023-07-07 22:39:39,983-0300 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 ignored: [localhost]: FAILED! =>
{"attempts": 120, "changed": false, "ovirt_hosts": [{"address":
"ksmmi1r02ovirt36.kosmo.cloud", "affinity_labels": [], "auto_numa_status":
"unknown", "certificate": {"organization": "kosmo.cloud", "subject":
"O=kosmo.cloud,CN=ksmmi1r02ovirt36.kosmo.cloud"}, "cluster": {"href":
"/ovirt-engine/api/clusters/d8784faf-8b77-45c8-9fa4-b9b4b0404d95", "id":
"d8784faf-8b77-45c8-9fa4-b9b4b0404d95"}, "comment": "", "cpu": {"speed":
0.0, "topology": {}}, "cpu_units": [], "device_passthrough": {"enabled":
false}, "devices": [], "external_network_provider_configurations": [],
"external_status": "ok", "hardware_information": {"supported_rng_sources":
[]}, "hooks": [], "href":
"/ovirt-engine/api/hosts/d1bf8fb2-74f4-4954-8c34-66eb99ba2bf3", "id":
"d1bf8fb2-74f4-4954-8c34-66eb99ba2bf3", "katello_errata": [],
"kdump_status": "unknown", "ksm": {"enabled": false},
"max_scheduling_memory": 0, "memory": 0, "name":
"ksmmi1r02ovirt36.kosmo.cloud", "network_attachments": [], "nics": [],
"numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline":
""}, "ovn_configured": false, "permissions": [], "port": 54321,
"power_management": {"automatic_pm_enabled": true, "enabled": false,
"kdump_detection": true, "pm_proxies": []}, "protocol": "stomp",
"reinstallation_required": false, "se_**FILTERED**": {}, "spm":
{"priority": 5, "status": "none"}, "ssh": {"fingerprint":
"SHA256:Nr04m1g0UxbpqxwMBr93DLHz2m2wzR8+xFJhBVNovHY", "port": 22,
"public_key": "ecdsa-sha2-nistp256
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE2EdJn0vJiJUagEK3w2G2nHmziJJasailwapaL06qWU2+BkPwkokSvyK07APhwyynnz6lw8J4y/kWv12D7/r+s="},
"statistics": [], "status": "install_failed",
"storage_connection_extensions": [], "summary": {"total": 0}, "tags": [],
"transparent_huge_pages": {"enabled": false}, "type": "rhel",
"unmanaged_networks": [], "update_available": false, "vgpu_placement":
"consolidated"}]}
2023-07-07 22:39:40,284-0300 INFO
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup :
Notify the user about a failure]
2023-07-07 22:39:40,685-0300 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 {'msg': 'Host is not up, please check
logs, perhaps also on the engine machine', '_ansible_no_log': None,
'changed': False}
2023-07-07 22:39:40,786-0300 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:113 fatal: [localhost]: FAILED! =>
{"changed": false, "msg": "Host is not up, please check logs, perhaps also
on the engine machine"}
Have a nice weekend!
--
Att,
Jorge Visentini
+55 55 98432-9868
1 year, 9 months
Ovirt engine after OS update doesn't start with WFLYEE0042: Failed to construct component instance
by Jirka Simon
Hello oVirt community. I just updated some OS packages (the engine itself was already up to date) and after the restart the engine doesn't start: the web UI returns error 500 and the following error message appears in server.log.
I checked the known postgresql-jdbc issue (it has similar symptoms) and downgraded the driver even though it should already be fixed there, but the downgrade didn't help.
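What I have not ruled out yet is a duplicate osinfo definition, since the error below complains about a colliding os id 302 at /os/centos_9x64/id; a quick check would be something like this (engine paths as I remember them, please adjust):
```
grep -rn "centos_9x64" /usr/share/ovirt-engine/conf/osinfo-defaults.properties /etc/ovirt-engine/osinfo.conf.d/
```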
here is part of server.log and list of all updated packages.
thank you for any help.
Jirka
023-07-29 12:01:51,382+02 ERROR [org.jboss.msc.service.fail]
(ServerService Thread Pool -- 42) MSC000001: Failed to start service
jboss.deployment.subunit."engine.ear"."bll.jar".component.
Backend.START: org.jboss.msc.service.StartException in service
jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START:
java.lang.IllegalStateException: WFLYEE0042: Failed t o construct
component instance at
org.jboss.as.ee@24.0.1.Final//org.jboss.as.ee.component.ComponentStartService$1.run(ComponentStartService.java:57)
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at
org.jboss.threads@2.4.0.Final//org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at
org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1990)
at
org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
at
org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.base/java.lang.Thread.run(Thread.java:829) at
org.jboss.threads@2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to
construct component instance at
org.jboss.as.ee@24.0.1.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:170)
at
org.jboss.as.ee@24.0.1.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:141)
at
org.jboss.as.ee@24.0.1.Final//org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:88)
at
org.jboss.as.ejb3@24.0.1.Final//org.jboss.as.ejb3.component.singleton.SingletonComponent.getComponentInstance(SingletonComponent.java:127)
at
org.jboss.as.ejb3@24.0.1.Final//org.jboss.as.ejb3.component.singleton.SingletonComponent.start(SingletonComponent.java:141)
at
org.jboss.as.ee@24.0.1.Final//org.jboss.as.ee.component.ComponentStartService$1.run(ComponentStartService.java:54)
... 8 more Caused by: javax.ejb.EJBException:
java.lang.RuntimeException: colliding os id 302 at node
/os/centos_9x64/id at
org.jboss.as.ejb3@24.0.1.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:239)
at
org.jboss.as.ejb3@24.0.1.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:446)
at
org.jboss.as.ejb3@24.0.1.Final//org.jboss.as.ejb3.tx.LifecycleCMTTxInterceptor.processInvocation(LifecycleCMTTxInterceptor.java:70)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.weld@24.0.1.Final//org.jboss.as.weld.injection.WeldInjectionContextInterceptor.processInvocation(WeldInjectionContextInterceptor.java:43)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.ejb3@24.0.1.Final//org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.ee@24.0.1.Final//org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.ejb3@24.0.1.Final//org.jboss.as.ejb3.component.singleton.StartupCountDownInterceptor.processInvocation(StartupCountDownInterceptor.java:25)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
at
org.jboss.as.ee@24.0.1.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:168)
... 13 more Caused by: java.lang.RuntimeException: colliding os
id 302 at node /os/centos_9x64/id at
org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.OsRepositoryImpl.buildIdToUnameLookup(OsRepositoryImpl.java:105)
at
org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.OsRepositoryImpl.init(OsRepositoryImpl.java:66)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.initOsRepository(Backend.java:674)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.initialize(Backend.java:242)
at
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.create(Backend.java:178)
at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method) at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at
org.jboss.as.ee@24.0.1.Final//org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:96)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
at
org.jboss.as.weld.common@24.0.1.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:79)
at
org.jboss.as.weld.common@24.0.1.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doLifecycleInterception(Jsr299BindingsInterceptor.java:126)
at
org.jboss.as.weld.common@24.0.1.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:112)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
at
org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:81)
at
org.jboss.as.weld.common@24.0.1.Final//org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.weld@24.0.1.Final//org.jboss.as.weld.injection.WeldInjectionInterceptor.processInvocation(WeldInjectionInterceptor.java:53)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.ee@24.0.1.Final//org.jboss.as.ee.component.AroundConstructInterceptorFactory$1.processInvocation(AroundConstructInterceptorFactory.java:28)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.weld@24.0.1.Final//org.jboss.as.weld.injection.WeldInterceptorInjectionInterceptor.processInvocation(WeldInterceptorInjectionInterceptor.java:56)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.ee@24.0.1.Final//org.jboss.as.ee.component.ComponentInstantiatorInterceptor.processInvocation(ComponentInstantiatorInterceptor.java:74)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.weld@24.0.1.Final//org.jboss.as.weld.interceptors.Jsr299BindingsCreateInterceptor.processInvocation(Jsr299BindingsCreateInterceptor.java:111)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.ee@24.0.1.Final//org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at
org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.ejb3@24.0.1.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:232)
... 28 more 2023-07-29 12:01:51,393+02 ERROR
[org.jboss.as.controller.management-operation] (Controller Boot Thread)
WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" =>
"engine.ear") ]) - failure description: {"WFLYCTL0080: Failed services"
=>
{"jboss.deployment.subunit.\"engine.ear\".\"bll.jar\".component.Backend.START"
=> "java.lang.IllegalStateException: WFLYEE0042: Failed to construct
component instance Caused by: java.lang.IllegalStateException:
WFLYEE0042: Failed to construct component instance Caused by:
javax.ejb.EJBException: java.lang.RuntimeException: colliding os id 302
at node /os/centos_9x64/id Caused by: java.lang.RuntimeException:
colliding os id 302 at node /os/centos_9x64/id"}} 2023-07-29
12:01:51,416+02 INFO [org.jboss.as.server] (ServerService Thread Pool
-- 27) WFLYSRV0010: Deployed "ovirt-web-ui.war" (runtime-name :
"ovirt-web-ui.war") 2023-07-29 12:01:51,417+02 INFO
[org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010:
Deployed "apidoc.war" (runtime-name : "apidoc.war") 2023-07-29
12:01:51,417+02 INFO [org.jboss.as.server] (ServerService Thread Pool
-- 27) WFLYSRV0010: Deployed "restapi.war" (runtime-name :
"restapi.war") 2023-07-29 12:01:51,417+02 INFO [org.jboss.as.server]
(ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "engine.ear"
(runtime-name : "engine.ear") 2023-07-29 12:01:51,420+02 INFO
[org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0183:
Service status report WFLYCTL0186: Services which failed to start:
service
jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START:
java.lang.IllegalStateException: WFLYEE0042: Failed to c onstruct
component instance WFLYCTL0448: 2 additional services are down due to
their dependencies being missing or failed 2023-07-29 12:01:51,462+02
INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212:
Resuming server 2023-07-29 12:01:51,466+02 ERROR [org.jboss.as]
(Controller Boot Thread) WFLYSRV0026: WildFly Full 24.0.1.Final (WildFly
Core 16.0.1.Final) started (with errors) in 14577ms - Started 1670 o f
1890 services (6 services failed or missing dependencies, 393 services
are lazy, passive or on-demand) 2023-07-29 12:01:51,468+02 INFO
[org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management
interface listening on http://127.0.0.1:8706/management 2023-07-29
12:01:51,469+02 INFO [org.jboss.as] (Controller Boot Thread)
WFLYSRV0051: Admin console listening on http://127.0.0.1:8706

List of all updated packages (output of "dnf history info 21"):

Transaction ID : 21
Begin time     : Sat 29 Jul 2023 10:59:14 AM CEST
Begin rpmdb    : 1077:5e724adb5bc93f2e2930cf25e895ccb36798d23c
End time       : Sat 29 Jul 2023 11:07:20 AM CEST (8 minutes)
End rpmdb      : 1087:60820a34eb4d036ba2c19d49841036fae4209af5
User           : root <root>
Return-Code    : Success
Releasever     : 8
Command Line   : update
Comment        :
Packages Altered:
    Install
apr-util-bdb-1.6.1-9.el8.x86_64
@appstream Install
apr-util-openssl-1.6.1-9.el8.x86_64
@appstream Install
kernel-4.18.0-500.el8.x86_64
@baseos Install
kernel-core-4.18.0-500.el8.x86_64
@baseos Install
kernel-modules-4.18.0-500.el8.x86_64
@baseos Install
ongres-stringprep-1.1-2.el8.noarch
@centos-ovirt45 Install
python3.11-jmespath-0.9.0-11.5.el8.noarch
@centos-ovirt45 Install
python3.11-ovirt-engine-sdk4-4.6.2-1.el8.x86_64
@centos-ovirt45 Install
python3.11-ovirt-imageio-client-2.5.0-1.el8.x86_64
@centos-ovirt45 Install
python3.11-ovirt-imageio-common-2.5.0-1.el8.x86_64
@centos-ovirt45 Install
python3.11-passlib-1.7.4-3.3.el8.noarch
@centos-ovirt45 Install
python3.11-pycurl-7.45.2-2.2.el8.x86_64
@centos-ovirt45 Install
velocity-1.7-36.2.el8s.noarch
@centos-ovirt45 Upgrade
ceph-common-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
ceph-common-2:16.2.11-1.el8s.x86_64
@@System Upgrade
libcephfs2-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
libcephfs2-2:16.2.11-1.el8s.x86_64
@@System Upgrade
librados2-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
librados2-2:16.2.11-1.el8s.x86_64
@@System Upgrade
libradosstriper1-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
libradosstriper1-2:16.2.11-1.el8s.x86_64
@@System Upgrade
librbd1-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
librbd1-2:16.2.11-1.el8s.x86_64
@@System Upgrade
librgw2-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
librgw2-2:16.2.11-1.el8s.x86_64
@@System Upgrade
python3-ceph-argparse-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
python3-ceph-argparse-2:16.2.11-1.el8s.x86_64
@@System Upgrade
python3-ceph-common-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
python3-ceph-common-2:16.2.11-1.el8s.x86_64
@@System Upgrade
python3-cephfs-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
python3-cephfs-2:16.2.11-1.el8s.x86_64
@@System Upgrade
python3-rados-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
python3-rados-2:16.2.11-1.el8s.x86_64
@@System Upgrade
python3-rbd-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
python3-rbd-2:16.2.11-1.el8s.x86_64
@@System Upgrade
python3-rgw-2:16.2.13-1.el8s.x86_64
@centos-ceph-pacific Upgraded
python3-rgw-2:16.2.11-1.el8s.x86_64
@@System Upgrade
openvswitch2.15-2.15.0-136.el8s.x86_64
@centos-nfv-openvswitch Upgraded
openvswitch2.15-2.15.0-135.el8s.x86_64
@@System Upgrade
python3-openvswitch2.15-2.15.0-136.el8s.x86_64
@centos-nfv-openvswitch Upgraded
python3-openvswitch2.15-2.15.0-135.el8s.x86_64
@@System Upgrade PackageKit-1.1.12-7.el8.x86_64
@appstream Upgraded
PackageKit-1.1.12-6.el8.x86_64
@@System Upgrade
PackageKit-glib-1.1.12-7.el8.x86_64
@appstream Upgraded
PackageKit-glib-1.1.12-6.el8.x86_64
@@System Upgrade
alsa-lib-1.2.9-1.el8.x86_64
@appstream Upgraded
alsa-lib-1.2.8-2.el8.x86_64
@@System Upgrade
ansible-core-2.15.0-1.el8.x86_64
@appstream Upgraded
ansible-core-2.14.2-3.el8.x86_64
@@System Upgrade
apr-util-1.6.1-9.el8.x86_64
@appstream Upgraded
apr-util-1.6.1-6.el8.x86_64
@@System Upgrade
cjose-0.6.1-3.module_el8+302+abe4241d.x86_64
@appstream Upgraded
cjose-0.6.1-2.module_el8.4.0+674+2c6c7264.x86_64
@@System Upgrade cockpit-packagekit-295-1.el8.noarch
@appstream Upgraded
cockpit-packagekit-284-1.el8.noarch
@@System Upgrade
gdb-headless-8.2-20.el8.x86_64
@appstream Upgraded
gdb-headless-8.2-19.el8.x86_64
@@System Upgrade
git-2.39.3-1.el8.x86_64
@appstream Upgraded
git-2.39.1-1.el8.x86_64
@@System Upgrade
git-core-2.39.3-1.el8.x86_64
@appstream Upgraded
git-core-2.39.1-1.el8.x86_64
@@System Upgrade
git-core-doc-2.39.3-1.el8.noarch
@appstream Upgraded
git-core-doc-2.39.1-1.el8.noarch
@@System Upgrade
grafana-9.2.10-4.el8.x86_64
@appstream Upgraded
grafana-7.5.15-4.el8.x86_64
@@System Upgrade
grafana-pcp-5.1.1-1.el8.x86_64
@appstream Upgraded
grafana-pcp-3.2.0-3.el8.x86_64
@@System Upgrade jq-1.6-7.el8.x86_64
@appstream Upgraded
jq-1.6-6.el8.x86_64
@@System Upgrade
libfastjson-0.99.9-2.el8.x86_64
@appstream Upgraded
libfastjson-0.99.9-1.el8.x86_64
@@System Upgrade
librsvg2-2.42.7-5.el8.x86_64
@appstream Upgraded
librsvg2-2.42.7-4.el8.x86_64
@@System Upgrade
libtiff-4.0.9-28.el8.x86_64
@appstream Upgraded
libtiff-4.0.9-27.el8.x86_64
@@System Upgrade
libwebp-1.0.0-9.el8.x86_64
@appstream Upgraded
libwebp-1.0.0-5.el8.x86_64
@@System Upgrade
mod_auth_openidc-2.4.9.4-5.module_el8+319+bd773fd7.x86_64
@appstream Upgraded
mod_auth_openidc-2.4.9.4-1.module_el8.7.0+1136+d8f380b8.x86_64
@@System Upgrade nspr-4.35.0-1.el8.x86_64
@appstream Upgraded
nspr-4.34.0-3.el8.x86_64
@@System Upgrade
perl-Git-2.39.3-1.el8.noarch
@appstream Upgraded
perl-Git-2.39.1-1.el8.noarch
@@System Upgrade
platform-python-devel-3.6.8-52.el8.x86_64
@appstream Upgraded
platform-python-devel-3.6.8-51.el8.x86_64
@@System Upgrade protobuf-c-1.3.0-8.el8.x86_64
@appstream Upgraded
protobuf-c-1.3.0-6.el8.x86_64
@@System Upgrade
python3-tkinter-3.6.8-52.el8.x86_64
@appstream Upgraded
python3-tkinter-3.6.8-51.el8.x86_64
@@System Upgrade
python3.11-3.11.4-1.el8.x86_64
@appstream Upgraded
python3.11-3.11.2-2.el8.x86_64
@@System Upgrade
python3.11-libs-3.11.4-1.el8.x86_64
@appstream Upgraded
python3.11-libs-3.11.2-2.el8.x86_64
@@System Upgrade
python3.11-tkinter-3.11.4-1.el8.x86_64
@appstream Upgraded
python3.11-tkinter-3.11.2-2.el8.x86_64
@@System Upgrade
python38-3.8.16-1.module_el8.8.0+1242+93d6d191.x86_64
@appstream Upgraded
python38-3.8.13-1.module_el8.7.0+1177+19c53253.x86_64
@@System Upgrade
python38-libs-3.8.16-1.module_el8.8.0+1242+93d6d191.x86_64
@appstream Upgraded
python38-libs-3.8.13-1.module_el8.7.0+1177+19c53253.x86_64
@@System Upgrade
python38-tkinter-3.8.16-1.module_el8.8.0+1242+93d6d191.x86_64
@appstream Upgraded
python38-tkinter-3.8.13-1.module_el8.7.0+1177+19c53253.x86_64
@@System Upgrade
qemu-guest-agent-15:6.2.0-35.module_el8+466+00f6f2b0.x86_64
@appstream Upgraded
qemu-guest-agent-15:6.2.0-28.module_el8.8.0+1257+0c3374ae.x86_64
@@System Upgrade qemu-img-15:6.2.0-35.module_el8+466+00f6f2b0.x86_64
@appstream Upgraded
qemu-img-15:6.2.0-28.module_el8.8.0+1257+0c3374ae.x86_64
@@System Upgrade rhel-system-roles-1.22.0-0.13.el8.noarch
@appstream Upgraded
rhel-system-roles-1.21.0-2.el8.noarch
@@System Upgrade
rsyslog-8.2102.0-15.el8.x86_64
@appstream Upgraded
rsyslog-8.2102.0-13.el8.x86_64
@@System Upgrade
rsyslog-elasticsearch-8.2102.0-15.el8.x86_64
@appstream Upgraded
rsyslog-elasticsearch-8.2102.0-13.el8.x86_64
@@System Upgrade
rsyslog-gnutls-8.2102.0-15.el8.x86_64
@appstream Upgraded
rsyslog-gnutls-8.2102.0-13.el8.x86_64
@@System Upgrade
rsyslog-mmjsonparse-8.2102.0-15.el8.x86_64
@appstream Upgraded
rsyslog-mmjsonparse-8.2102.0-13.el8.x86_64
@@System Upgrade
rsyslog-mmnormalize-8.2102.0-15.el8.x86_64
@appstream Upgraded
rsyslog-mmnormalize-8.2102.0-13.el8.x86_64
@@System Upgrade
texlive-base-7:20180414-29.el8.noarch
@appstream Upgraded
texlive-base-7:20180414-28.el8.noarch
@@System Upgrade
texlive-dvipng-7:20180414-29.el8.x86_64
@appstream Upgraded
texlive-dvipng-7:20180414-28.el8.x86_64
@@System Upgrade
texlive-kpathsea-7:20180414-29.el8.x86_64
@appstream Upgraded
texlive-kpathsea-7:20180414-28.el8.x86_64
@@System Upgrade
texlive-lib-7:20180414-29.el8.x86_64
@appstream Upgraded
texlive-lib-7:20180414-28.el8.x86_64
@@System Upgrade
texlive-tetex-7:20180414-29.el8.noarch
@appstream Upgraded
texlive-tetex-7:20180414-28.el8.noarch
@@System Upgrade
texlive-texlive.infra-7:20180414-29.el8.noarch
@appstream Upgraded
texlive-texlive.infra-7:20180414-28.el8.noarch
@@System Upgrade tzdata-java-2023c-1.el8.noarch
@appstream Upgraded
tzdata-java-2022g-2.el8.noarch
@@System Upgrade
NetworkManager-1:1.40.16-8.el8.x86_64
@baseos Upgraded
NetworkManager-1:1.40.16-2.el8.x86_64
@@System Upgrade
NetworkManager-libnm-1:1.40.16-8.el8.x86_64
@baseos Upgraded
NetworkManager-libnm-1:1.40.16-2.el8.x86_64
@@System Upgrade
NetworkManager-team-1:1.40.16-8.el8.x86_64
@baseos Upgraded
NetworkManager-team-1:1.40.16-2.el8.x86_64
@@System Upgrade
NetworkManager-tui-1:1.40.16-8.el8.x86_64
@baseos Upgraded
NetworkManager-tui-1:1.40.16-2.el8.x86_64
@@System Upgrade audit-3.0.7-5.el8.x86_64
@baseos Upgraded
audit-3.0.7-4.el8.x86_64
@@System Upgrade
audit-libs-3.0.7-5.el8.x86_64
@baseos Upgraded
audit-libs-3.0.7-4.el8.x86_64
@@System Upgrade
binutils-2.30-121.el8.x86_64
@baseos Upgraded
binutils-2.30-119.el8.x86_64
@@System Upgrade
c-ares-1.13.0-8.el8.x86_64
@baseos Upgraded
c-ares-1.13.0-6.el8.x86_64
@@System Upgrade
chkconfig-1.19.2-1.el8.x86_64
@baseos Upgraded
chkconfig-1.19.1-1.el8.x86_64
@@System Upgrade
cockpit-295-1.el8.x86_64
@baseos Upgraded
cockpit-288.2-1.el8.x86_64
@@System Upgrade
cockpit-bridge-295-1.el8.x86_64
@baseos Upgraded
cockpit-bridge-288.2-1.el8.x86_64
@@System Upgrade
cockpit-system-295-1.el8.noarch
@baseos Upgraded
cockpit-system-288.2-1.el8.noarch
@@System Upgrade
cockpit-ws-295-1.el8.x86_64
@baseos Upgraded
cockpit-ws-288.2-1.el8.x86_64
@@System Upgrade
curl-7.61.1-31.el8.x86_64
@baseos Upgraded
curl-7.61.1-30.el8.x86_64
@@System Upgrade
dbus-1:1.12.8-25.el8.x86_64
@baseos Upgraded
dbus-1:1.12.8-24.el8.x86_64
@@System Upgrade
dbus-common-1:1.12.8-25.el8.noarch
@baseos Upgraded
dbus-common-1:1.12.8-24.el8.noarch
@@System Upgrade
dbus-daemon-1:1.12.8-25.el8.x86_64
@baseos Upgraded
dbus-daemon-1:1.12.8-24.el8.x86_64
@@System Upgrade
dbus-libs-1:1.12.8-25.el8.x86_64
@baseos Upgraded
dbus-libs-1:1.12.8-24.el8.x86_64
@@System Upgrade
dbus-tools-1:1.12.8-25.el8.x86_64
@baseos Upgraded
dbus-tools-1:1.12.8-24.el8.x86_64
@@System Upgrade
dnf-4.7.0-18.el8.noarch
@baseos Upgraded
dnf-4.7.0-15.el8.noarch
@@System Upgrade
dnf-data-4.7.0-18.el8.noarch
@baseos Upgraded
dnf-data-4.7.0-15.el8.noarch
@@System Upgrade
dnf-plugins-core-4.0.21-23.el8.noarch
@baseos Upgraded
dnf-plugins-core-4.0.21-18.el8.noarch
@@System Upgrade
dracut-049-225.git20230614.el8.x86_64
@baseos Upgraded
dracut-049-223.git20230119.el8.x86_64
@@System Upgrade
dracut-config-generic-049-225.git20230614.el8.x86_64
@baseos Upgraded
dracut-config-generic-049-223.git20230119.el8.x86_64
@@System Upgrade
dracut-config-rescue-049-225.git20230614.el8.x86_64
@baseos Upgraded
dracut-config-rescue-049-223.git20230119.el8.x86_64
@@System Upgrade
dracut-network-049-225.git20230614.el8.x86_64
@baseos Upgraded
dracut-network-049-223.git20230119.el8.x86_64
@@System Upgrade
dracut-squash-049-225.git20230614.el8.x86_64
@baseos Upgraded
dracut-squash-049-223.git20230119.el8.x86_64
@@System Upgrade elfutils-0.189-2.el8.x86_64
@baseos Upgraded
elfutils-0.188-3.el8.x86_64
@@System Upgrade
elfutils-debuginfod-client-0.189-2.el8.x86_64
@baseos Upgraded
elfutils-debuginfod-client-0.188-3.el8.x86_64
@@System Upgrade
elfutils-default-yama-scope-0.189-2.el8.noarch
@baseos Upgraded
elfutils-default-yama-scope-0.188-3.el8.noarch
@@System Upgrade elfutils-libelf-0.189-2.el8.x86_64
@baseos Upgraded
elfutils-libelf-0.188-3.el8.x86_64
@@System Upgrade
elfutils-libs-0.189-2.el8.x86_64
@baseos Upgraded
elfutils-libs-0.188-3.el8.x86_64
@@System Upgrade
emacs-filesystem-1:26.1-11.el8.noarch
@baseos Upgraded
emacs-filesystem-1:26.1-9.el8.noarch
@@System Upgrade file-5.33-25.el8.x86_64
@baseos Upgraded
file-5.33-24.el8.x86_64
@@System Upgrade
file-libs-5.33-25.el8.x86_64
@baseos Upgraded
file-libs-5.33-24.el8.x86_64
@@System Upgrade
fuse-libs-2.9.7-17.el8.x86_64
@baseos Upgraded
fuse-libs-2.9.7-16.el8.x86_64
@@System Upgrade
fuse3-libs-3.3.0-17.el8.x86_64
@baseos Upgraded
fuse3-libs-3.3.0-16.el8.x86_64
@@System Upgrade
glibc-2.28-228.el8.x86_64
@baseos Upgraded
glibc-2.28-225.el8.x86_64
@@System Upgrade
glibc-common-2.28-228.el8.x86_64
@baseos Upgraded
glibc-common-2.28-225.el8.x86_64
@@System Upgrade
glibc-gconv-extra-2.28-228.el8.x86_64
@baseos Upgraded
glibc-gconv-extra-2.28-225.el8.x86_64
@@System Upgrade
glibc-langpack-en-2.28-228.el8.x86_64
@baseos Upgraded
glibc-langpack-en-2.28-225.el8.x86_64
@@System Upgrade gnutls-3.6.16-7.el8.x86_64
@baseos Upgraded
gnutls-3.6.16-6.el8.x86_64
@@System Upgrade
grubby-8.40-48.el8.x86_64
@baseos Upgraded
grubby-8.40-47.el8.x86_64
@@System Upgrade
hwdata-0.314-8.18.el8.noarch
@baseos Upgraded
hwdata-0.314-8.16.el8.noarch
@@System Upgrade
iproute-6.2.0-2.el8.x86_64
@baseos Upgraded
iproute-5.18.0-1.el8.x86_64
@@System Upgrade
iscsi-initiator-utils-6.2.1.4-8.git095f59c.el8.x86_64
@baseos Upgraded
iscsi-initiator-utils-6.2.1.4-4.git095f59c.el8.x86_64
@@System Upgrade
iscsi-initiator-utils-iscsiuio-6.2.1.4-8.git095f59c.el8.x86_64
@baseos Upgraded
iscsi-initiator-utils-iscsiuio-6.2.1.4-4.git095f59c.el8.x86_64
@@System Upgrade kbd-2.0.4-11.el8.x86_64
@baseos Upgraded
kbd-2.0.4-10.el8.x86_64
@@System Upgrade
kbd-legacy-2.0.4-11.el8.noarch
@baseos Upgraded
kbd-legacy-2.0.4-10.el8.noarch
@@System Upgrade
kbd-misc-2.0.4-11.el8.noarch
@baseos Upgraded
kbd-misc-2.0.4-10.el8.noarch
@@System Upgrade
kernel-tools-4.18.0-500.el8.x86_64
@baseos Upgraded
kernel-tools-4.18.0-481.el8.x86_64
@@System Upgrade
kernel-tools-libs-4.18.0-500.el8.x86_64
@baseos Upgraded
kernel-tools-libs-4.18.0-481.el8.x86_64
@@System Upgrade
libblkid-2.32.1-42.el8.x86_64
@baseos Upgraded
libblkid-2.32.1-41.el8.x86_64
@@System Upgrade
libcurl-7.61.1-31.el8.x86_64
@baseos Upgraded
libcurl-7.61.1-30.el8.x86_64
@@System Upgrade
libdnf-0.63.0-16.el8.x86_64
@baseos Upgraded
libdnf-0.63.0-13.el8.x86_64
@@System Upgrade
libfdisk-2.32.1-42.el8.x86_64
@baseos Upgraded
libfdisk-2.32.1-41.el8.x86_64
@@System Upgrade
libgcc-8.5.0-20.el8.x86_64
@baseos Upgraded
libgcc-8.5.0-18.el8.x86_64
@@System Upgrade
libgfortran-8.5.0-20.el8.x86_64
@baseos Upgraded
libgfortran-8.5.0-18.el8.x86_64
@@System Upgrade
libgomp-8.5.0-20.el8.x86_64
@baseos Upgraded
libgomp-8.5.0-18.el8.x86_64
@@System Upgrade
libibverbs-46.0-1.el8.1.x86_64
@baseos Upgraded
libibverbs-44.0-2.el8.1.x86_64
@@System Upgrade
libldb-2.7.2-3.el8.x86_64
@baseos Upgraded
libldb-2.6.1-1.el8.x86_64
@@System Upgrade
libmount-2.32.1-42.el8.x86_64
@baseos Upgraded
libmount-2.32.1-41.el8.x86_64
@@System Upgrade
libnftnl-1.2.2-3.el8.x86_64
@baseos Upgraded
libnftnl-1.1.5-5.el8.x86_64
@@System Upgrade
libquadmath-8.5.0-20.el8.x86_64
@baseos Upgraded
libquadmath-8.5.0-18.el8.x86_64
@@System Upgrade
librabbitmq-0.9.0-4.el8.x86_64
@baseos Upgraded
librabbitmq-0.9.0-3.el8.x86_64
@@System Upgrade
librdmacm-46.0-1.el8.1.x86_64
@baseos Upgraded
librdmacm-44.0-2.el8.1.x86_64
@@System Upgrade
libsmartcols-2.32.1-42.el8.x86_64
@baseos Upgraded
libsmartcols-2.32.1-41.el8.x86_64
@@System Upgrade
libsolv-0.7.20-6.el8.x86_64
@baseos Upgraded
libsolv-0.7.20-4.el8.x86_64
@@System Upgrade
libsoup-2.62.3-4.el8.x86_64
@baseos Upgraded
libsoup-2.62.3-3.el8.x86_64
@@System Upgrade
libsss_autofs-2.9.1-1.el8.x86_64
@baseos Upgraded
libsss_autofs-2.8.2-1.el8.x86_64
@@System Upgrade
libsss_certmap-2.9.1-1.el8.x86_64
@baseos Upgraded
libsss_certmap-2.8.2-1.el8.x86_64
@@System Upgrade
libsss_idmap-2.9.1-1.el8.x86_64
@baseos Upgraded
libsss_idmap-2.8.2-1.el8.x86_64
@@System Upgrade
libsss_nss_idmap-2.9.1-1.el8.x86_64
@baseos Upgraded
libsss_nss_idmap-2.8.2-1.el8.x86_64
@@System Upgrade
libsss_sudo-2.9.1-1.el8.x86_64
@baseos Upgraded
libsss_sudo-2.8.2-1.el8.x86_64
@@System Upgrade
libstdc++-8.5.0-20.el8.x86_64
@baseos Upgraded
libstdc++-8.5.0-18.el8.x86_64
@@System Upgrade
libtalloc-2.4.0-3.el8.x86_64
@baseos Upgraded
libtalloc-2.3.4-1.el8.x86_64
@@System Upgrade
libtdb-1.4.8-3.el8.x86_64
@baseos Upgraded
libtdb-1.4.7-1.el8.x86_64
@@System Upgrade
libtevent-0.14.1-3.el8.x86_64
@baseos Upgraded
libtevent-0.13.0-1.el8.x86_64
@@System Upgrade
libuuid-2.32.1-42.el8.x86_64
@baseos Upgraded
libuuid-2.32.1-41.el8.x86_64
@@System Upgrade
linux-firmware-20230515-115.gitd1962891.el8.noarch
@baseos Upgraded
linux-firmware-20230217-113.git83f1d778.el8.noarch
@@System Upgrade memstrack-0.2.5-2.el8.x86_64
@baseos Upgraded
memstrack-0.2.4-2.el8.x86_64
@@System Upgrade
microcode_ctl-4:20230214-2.el8.x86_64
@baseos Upgraded
microcode_ctl-4:20220809-2.el8.x86_64
@@System Upgrade nvme-cli-1.16-9.el8.x86_64
@baseos Upgraded
nvme-cli-1.16-7.el8.x86_64
@@System Upgrade
nvmetcli-0.7-5.el8.noarch
@baseos Upgraded
nvmetcli-0.7-3.el8.noarch
@@System Upgrade
pam-1.3.1-27.el8.x86_64
@baseos Upgraded
pam-1.3.1-25.el8.x86_64
@@System Upgrade
platform-python-3.6.8-52.el8.x86_64
@baseos Upgraded
platform-python-3.6.8-51.el8.x86_64
@@System Upgrade
python-rpm-macros-3-45.el8.noarch
@baseos Upgraded
python-rpm-macros-3-44.el8.noarch
@@System Upgrade
python-srpm-macros-3-45.el8.noarch
@baseos Upgraded
python-srpm-macros-3-44.el8.noarch
@@System Upgrade
python3-audit-3.0.7-5.el8.x86_64
@baseos Upgraded
python3-audit-3.0.7-4.el8.x86_64
@@System Upgrade
python3-cryptography-3.2.1-6.el8.x86_64
@baseos Upgraded
python3-cryptography-3.2.1-5.el8.x86_64
@@System Upgrade
python3-dnf-4.7.0-18.el8.noarch
@baseos Upgraded
python3-dnf-4.7.0-15.el8.noarch
@@System Upgrade
python3-dnf-plugin-versionlock-4.0.21-23.el8.noarch
@baseos Upgraded
python3-dnf-plugin-versionlock-4.0.21-18.el8.noarch
@@System Upgrade
python3-dnf-plugins-core-4.0.21-23.el8.noarch
@baseos Upgraded
python3-dnf-plugins-core-4.0.21-18.el8.noarch
@@System Upgrade
python3-hawkey-0.63.0-16.el8.x86_64
@baseos Upgraded
python3-hawkey-0.63.0-13.el8.x86_64
@@System Upgrade
python3-libdnf-0.63.0-16.el8.x86_64
@baseos Upgraded
python3-libdnf-0.63.0-13.el8.x86_64
@@System Upgrade
python3-libs-3.6.8-52.el8.x86_64
@baseos Upgraded
python3-libs-3.6.8-51.el8.x86_64
@@System Upgrade
python3-magic-5.33-25.el8.noarch
@baseos Upgraded
python3-magic-5.33-24.el8.noarch
@@System Upgrade
python3-perf-4.18.0-500.el8.x86_64
@baseos Upgraded
python3-perf-4.18.0-481.el8.x86_64
@@System Upgrade
python3-rpm-macros-3-45.el8.noarch
@baseos Upgraded
python3-rpm-macros-3-44.el8.noarch
@@System Upgrade
python3-syspurpose-1.28.38-1.el8.x86_64
@baseos Upgraded
python3-syspurpose-1.28.35-1.el8.x86_64
@@System Upgrade
selinux-policy-3.14.3-123.el8.noarch
@baseos Upgraded
selinux-policy-3.14.3-117.el8.noarch
@@System Upgrade
selinux-policy-targeted-3.14.3-123.el8.noarch
@baseos Upgraded
selinux-policy-targeted-3.14.3-117.el8.noarch
@@System Upgrade shadow-utils-2:4.6-18.el8.x86_64
@baseos Upgraded
shadow-utils-2:4.6-17.el8.x86_64
@@System Upgrade
sos-4.5.4-1.el8.noarch
@baseos Upgraded
sos-4.5.0-1.el8.noarch
@@System Upgrade
sqlite-3.26.0-18.el8.x86_64
@baseos Upgraded
sqlite-3.26.0-17.el8.x86_64
@@System Upgrade
sqlite-libs-3.26.0-18.el8.x86_64
@baseos Upgraded
sqlite-libs-3.26.0-17.el8.x86_64
@@System Upgrade
sssd-client-2.9.1-1.el8.x86_64
@baseos Upgraded
sssd-client-2.8.2-1.el8.x86_64
@@System Upgrade
sssd-common-2.9.1-1.el8.x86_64
@baseos Upgraded
sssd-common-2.8.2-1.el8.x86_64
@@System Upgrade
sssd-kcm-2.9.1-1.el8.x86_64
@baseos Upgraded
sssd-kcm-2.8.2-1.el8.x86_64
@@System Upgrade
sssd-nfs-idmap-2.9.1-1.el8.x86_64
@baseos Upgraded
sssd-nfs-idmap-2.8.2-1.el8.x86_64
@@System Upgrade
systemd-239-76.el8.x86_64
@baseos Upgraded
systemd-239-73.el8.x86_64
@@System Upgrade
systemd-libs-239-76.el8.x86_64
@baseos Upgraded
systemd-libs-239-73.el8.x86_64
@@System Upgrade
systemd-pam-239-76.el8.x86_64
@baseos Upgraded
systemd-pam-239-73.el8.x86_64
@@System Upgrade
systemd-udev-239-76.el8.x86_64
@baseos Upgraded
systemd-udev-239-73.el8.x86_64
@@System Upgrade
tmux-2.7-3.el8.x86_64
@baseos Upgraded
tmux-2.7-1.el8.x86_64
@@System Upgrade
tzdata-2023c-1.el8.noarch
@baseos Upgraded
tzdata-2022g-2.el8.noarch
@@System Upgrade
util-linux-2.32.1-42.el8.x86_64
@baseos Upgraded
util-linux-2.32.1-41.el8.x86_64
@@System Upgrade
which-2.21-20.el8.x86_64
@baseos Upgraded
which-2.21-18.el8.x86_64
@@System Upgrade
xfsprogs-5.0.0-12.el8.x86_64
@baseos Upgraded
xfsprogs-5.0.0-10.el8.x86_64
@@System Upgrade
yum-4.7.0-18.el8.noarch
@baseos Upgraded
yum-4.7.0-15.el8.noarch
@@System Upgrade
yum-utils-4.0.21-23.el8.noarch
@baseos Upgraded
yum-utils-4.0.21-18.el8.noarch
@@System Upgrade
zlib-1.2.11-25.el8.x86_64
@baseos Upgraded
zlib-1.2.11-21.el8.x86_64
@@System Upgrade
centos-release-ovirt45-8.9-1.el8s.noarch
@extras-common Upgraded
centos-release-ovirt45-8.7-3.el8s.noarch
@@System Upgrade ongres-scram-2.1-3.el8.noarch
@centos-ovirt45 Upgraded
ongres-scram-1.0.0~beta.2-5.el8.noarch
@@System Upgrade
ongres-scram-client-2.1-3.el8.noarch
@centos-ovirt45 Upgraded
ongres-scram-client-1.0.0~beta.2-5.el8.noarch
@@System Upgrade
ovirt-ansible-collection-3.1.2-1.el8.noarch
@centos-ovirt45 Upgraded
ovirt-ansible-collection-3.0.0-1.el8.noarch
@@System Upgrade
ovirt-dependencies-4.5.3-1.el8.noarch
@centos-ovirt45 Upgraded
ovirt-dependencies-4.5.2-1.el8.noarch
@@System Upgrade
ovirt-imageio-client-2.5.0-1.el8.x86_64
@centos-ovirt45 Upgraded
ovirt-imageio-client-2.4.7-1.el8.x86_64
@@System Upgrade
ovirt-imageio-common-2.5.0-1.el8.x86_64
@centos-ovirt45 Upgraded
ovirt-imageio-common-2.4.7-1.el8.x86_64
@@System Upgrade
ovirt-imageio-daemon-2.5.0-1.el8.x86_64
@centos-ovirt45 Upgraded
ovirt-imageio-daemon-2.4.7-1.el8.x86_64
@@System Upgrade
postgresql-jdbc-42.2.27-1.el8.noarch
@centos-ovirt45 Upgraded
postgresql-jdbc-42.2.14-2.el8.noarch
@@System Upgrade
python3-jmespath-0.9.0-11.5.el8.noarch
@centos-ovirt45 Upgraded
python3-jmespath-0.9.0-11.2.el8.noarch
@@System Upgrade
python3-netaddr-0.8.0-12.3.el8.noarch
@centos-ovirt45 Upgraded
python3-netaddr-0.7.19-8.1.2.el8.noarch
@@System Upgrade
python3-ovirt-engine-sdk4-4.6.2-1.el8.x86_64
@centos-ovirt45 Upgraded
python3-ovirt-engine-sdk4-4.6.0-1.el8.x86_64
@@System Upgrade
python3-passlib-1.7.4-3.3.el8.noarch
@centos-ovirt45 Upgraded
python3-passlib-1.7.4-1.el8.noarch
@@System Upgrade
python3-pycurl-7.45.2-2.2.el8.x86_64
@centos-ovirt45 Upgraded
python3-pycurl-7.43.0.2-4.2.el8.x86_64
@@System Upgrade
python3-cinder-common-1:20.3.0-1.el8.noarch
@ovirt-45-centos-stream-openstack-yoga Upgraded
python3-cinder-common-1:20.1.0-1.el8.noarch
@@System Upgrade
python3-os-brick-5.2.3-1.el8.noarch
@ovirt-45-centos-stream-openstack-yoga
Upgraded python3-os-brick-5.2.2-1.el8.noarch
@@System Upgrade
python3-oslo-messaging-12.13.1-1.el8.noarch
@ovirt-45-centos-stream-openstack-yoga Upgraded
python3-oslo-messaging-12.13.0-1.el8.noarch
@@System
Reason Change GConf2-3.2.6-22.el8.x86_64 @appstream
Removed kernel-4.18.0-383.el8.x86_64 @@System
Removed kernel-core-4.18.0-383.el8.x86_64 @@System
Removed kernel-modules-4.18.0-383.el8.x86_64 @@System
1 year, 9 months
CPU Compatibility with 4.5.4-1.el8 (4.7)
by mark.williams@nist.gov
I'm having an issue adding oVirt hosts using AlmaLinux 8/9 on systems with the Sandy Bridge CPU Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz. The error says the CPU is not compatible with oVirt "compatibility type 4.7".
It works when creating another cluster at "compatibility type 4.3", but not at 4.7. These hosts were re-purposed from an RHV 4.3 cluster.
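For reference, what the CPU reports can be compared against what VDSM detects like this (the getCapabilities key names are from memory; as far as I understand, Sandy Bridge is below the minimum CPU types required at cluster level 4.7):
```
lscpu | grep -E 'Model name|Flags'
vdsm-client Host getCapabilities | grep -E '"cpu(Model|Flags)"'
```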
Any thoughts?
1 year, 9 months
Connection to ovirt-imageio has failed
by masth.ganesh@gmail.com
I am trying to upload an ISO onto OLVM and I get the "connection to ovirt-imageio has failed" message when I test the connection. I have imported the certificates into the web browser, but for some reason I am still unable to test the connection successfully. I have tried other browsers such as Edge and Firefox as well.
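The only extra data point I have so far is a manual TLS check run from a shell on the engine machine itself (the hostname is a placeholder and 54323 is, as far as I know, the default ovirt-imageio port there):
```
systemctl status ovirt-imageio
openssl s_client -connect olvm-engine.example.com:54323 -CAfile /etc/pki/ovirt-engine/ca.pem </dev/null
```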
1 year, 9 months
Problem: VM with disk blocked by a snapshot that went wrong
by José Pascual
Hi,
I have a virtual machine blocked by its disk and I can't delete the disk to restore the VM; I can't do anything with it.
There is an error with the snapshot that never finishes (the virtual machine is now shut down). What can I do to delete the virtual machine and its disk from my engine?
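The only lead I have found so far, but have not dared to run yet, is the engine's unlock utility for stuck image/snapshot locks (path and options as I remember them; I would run it with -h first to confirm):
```
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t all
```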
--
Regards,
José Pascual Gallud Martínez
1 year, 9 months
Ovirt_Provider_Citrix-Xen
by rj.indramaya@gmail.com
Hi All,
First of all, apologies if I'm doing this the wrong way. I could not find any solution, so I hoped I'd get some answers here.
In my organization I started exploring oVirt Engine. I have created multiple VMs and they all work fine, but I'm now facing one issue: we have a Citrix Xen server that is out of date, so I started working on migrating its guests directly to oVirt Engine. I tried to add my Xen server as a provider, but it fails to load or connect to the Citrix Xen host. Let me know how I should move forward.
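In case it matters, the fallback route I'm considering is exporting each guest from XenServer and converting it with virt-v2v; the sketch below is untested, the option names are from memory, and every hostname/path is a placeholder, so please check virt-v2v(1) on your version:
```
# Export the guest from XenServer as an OVA, then convert and upload it into oVirt
virt-v2v -i ova /exports/myguest.ova \
  -o rhv-upload -oc https://engine.example.com/ovirt-engine/api \
  -os my_data_domain -op /root/engine-password \
  -oo rhv-cafile=/root/engine-ca.pem -oo rhv-cluster=Default
```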
Thanks in advance, people are amazing.
1 year, 9 months
Obtain SSO token using username/password credentials] fails Error during SSO authentication access_denied : Cannot authenticate user Invalid user credentials.
by antonio.riggio@mail.com
Can anyone tell me how I can fix this? I have not been able to install oVirt. I'm using the same password I use when logging in to oVirt, too. Thanks.
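One thing I have not tried yet is exercising the SSO token endpoint by hand; the endpoint below should be the standard one, but the exact username format (admin@ovirt@internalsso on Keycloak-based 4.5 setups versus admin@internal on older ones) and the engine URL are assumptions on my side:
```
curl -k -d 'grant_type=password' -d 'scope=ovirt-app-api' \
     -d 'username=admin@ovirt@internalsso' -d 'password=MYPASSWORD' \
     https://engine.example.com/ovirt-engine/sso/oauth/token
```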
INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ ERROR ] ovirtsdk4.AuthError: Error during SSO authentication access_denied : Cannot authenticate user Invalid user credentials.
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": false, "msg": "Error during SSO authentication access_denied : Cannot authenticate user Invalid user credentials."}
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Change ownership of copied engine logs]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool localvmxfo24vb0]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool localvmxfo24vb0]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool 9df88328-fb97-4230-a679-a9ab4cc59562]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool 9df88328-fb97-4230-a679-a9ab4cc59562]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
1 year, 9 months
oVirt Host EL9 with UEFI Secure Boot
by Jorge Visentini
Hi there!
Is this information still valid?
To use Enterprise Linux 9 on virtualization hosts, the UEFI Secure Boot
option must be disabled due to *Bug 2081648 - dmidecode module fails to
decode DMI data <https://bugzilla.redhat.com/show_bug.cgi?id=2081648>*.
I read that it was fixed in *python-dmidecode-3.12.3-1.el9* but we still
use *python-3.11*.
Is this bug only for hosted-engine or for standalone too?
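If it helps to narrow it down, this is how I was planning to check whether a given EL9 host is affected (assuming mokutil and python3-dmidecode are installed there; the dmidecode call is only meant to try to reproduce the decode failure from the bug):
```
mokutil --sb-state
sudo python3 -c 'import dmidecode; print(dmidecode.system())'
```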
BR.
--
Att,
Jorge Visentini
+55 55 98432-9868
1 year, 9 months
Processors compatibility matrix oVirt
by Jorge Visentini
Do you have any documentation or compatibility matrix between oVirt and
processors?
How can I know if a processor is compatible?
I ask because in the *lscpu* command I see 2 sockets, but in the *engine* I
only see 1 socket.
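For context, this is how I compared what the OS sees with what VDSM reports to the engine (the getCapabilities key names are from memory, so treat them as assumptions):
```
lscpu | grep -E '^(Socket|Core|Thread|Model name)'
vdsm-client Host getCapabilities | grep -E '"cpu(Sockets|Cores|Threads|Model)"'
```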
BR.
--
Att,
Jorge Visentini
+55 55 98432-9868
1 year, 9 months
ovirt with rocky linux kvm
by cynthiaberbery@outlook.com
Hello,
After installing multiple machines with Rocky Linux, oVirt was chosen to manage this infrastructure.
But adding these machines as "hosts" in oVirt always gives:
Error while executing action: Cannot add Host. Connecting to host via SSH has failed, verify that the host is reachable (IP address, routable address etc.) You may refer to the engine.log file for further details.
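For what it's worth, the same first step the add-host flow performs can be tested by hand from the engine machine (the hostname is a placeholder):
```
ssh -p 22 root@rocky-host.example.com 'echo connection ok'
```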
Is rocky kvm supported on ovirt 4.3.10.4-1.el7?
Do you have any contact with a supporting company/team for ovirt to check their support plan?
1 year, 9 months
How to re-enroll a host with an active workload whose certificate expired
by David Johnson
Good evening all,
I have a three host installation with a separate dedicated bare metal
system for the engine, running Ovirt 4.5.2.4-1.el8.
This afternoon, the engine lost communication with one of the hosts. The
engine log says the certificate is expired.
The official solution appears to be to put the host into maintenance mode
then re-enroll it.
Unfortunately, because the certificate is expired, the engine cannot switch
to maintenance mode or control the VM's to shut them down.
Error while executing action: Cannot switch Host to Maintenance mode.
Host still has running VMs on it and is in Non Responsive state.
See log excerpt below
What is the correct way to update/reinstate a certificate in a running
cluster when the engine does not acknowledge the host is operational due to
an expired certificate?
Thank you.
*David Johnson*
Log excerpt:
2023-07-20 16:27:46,904-05 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to /192.168.2.18
2023-07-20 16:27:46,904-05 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] *Connected to /192.168.2.18:54321 <http://192.168.2.18:54321>*
2023-07-20 16:27:46,912-05 ERROR
[org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] *Unable
to process messages Received fatal alert: certificate_expired*
2023-07-20 16:27:46,914-05 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-52) []
Unable to RefreshCapabilities: VDSNetworkException: VDSGenericException:
VDSNetworkException: Received fatal alert: certificate_expired
2023-07-20 16:27:47,356-05 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) []
Unable to RefreshCapabilities: ClientConnectionException: SSL session is
invalid
2023-07-20 16:27:47,356-05 WARN
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) []
Trying to release exclusive lock which does not exist, lock key:
'f69d35b2-7666-4ac6-8645-2f119cf2ce1cVDS_INIT'
2023-07-20 16:27:47,356-05 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) []
Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand'
return value
'org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturn@7d03f4f0'
2023-07-20 16:27:47,356-05 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) []
HostName = ovirt-host-03
2023-07-20 16:27:47,356-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) []
Command 'GetCapabilitiesAsyncVDSCommand(HostName = ovirt-host-03,
VdsIdAndVdsVDSCommandParametersBase:{hostId='f69d35b2-7666-4ac6-8645-2f119cf2ce1c',
vds='Host[ovirt-host-03,f69d35b2-7666-4ac6-8645-2f119cf2ce1c]'})' execution
failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: *SSL
session is invalid*
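For reference, a quick way to confirm which certificates on the host have actually
expired (these are the usual vdsm/libvirt paths on an oVirt host; adjust if yours differ):
  openssl x509 -noout -enddate -in /etc/pki/vdsm/certs/vdsmcert.pem
  openssl x509 -noout -enddate -in /etc/pki/vdsm/libvirt-spice/server-cert.pem
  openssl x509 -noout -enddate -in /etc/pki/libvirt/clientcert.pem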
1 year, 9 months
First time user/setup
by jmred88@gmail.com
Could someone point me to the proper documentation for a standalone setup? I have one host with local storage/resources and would like it to both host and manage my VMs, but I keep getting into the weeds over which documentation to follow.
1 year, 9 months
ovirt node 4.5 is not working on esxi8 in my lab
by poper@windowslive.com
Hello there,
I installed the latest oVirt Node 4.5 from the ISO on my ESXi 8 lab host. After I logged in to the web interface to manage this host, I cannot find the Virtualization menu, but when I tested 4.4.6 everything worked. Do you have any idea why?
Thanks.
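Two hedged things worth checking, since the question does not rule them out: whether
ESXi is actually exposing hardware virtualization to the node VM, and whether the
Cockpit plugin that used to provide the Virtualization page is installed at all on
the 4.5 image (it was dropped from newer builds, if memory serves):
  grep -cE 'vmx|svm' /proc/cpuinfo      # 0 means nested virtualization is off in ESXi
  ls -l /dev/kvm
  rpm -q cockpit-ovirt-dashboard cockpit-machines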
1 year, 9 months
Commit or release history
by Jorge Visentini
Hi.
I'm following the 4.5.5 ISO releases and I see that new ISOs are
coming out almost every day.
Is there any place where I can keep track of the changes made
from one ISO to the next? Is that open for us to follow?
Cheers!
--
Att,
Jorge Visentini
+55 55 98432-9868
1 year, 9 months
python sdk4 ovirt 4.5.5.0 master
by Jorge Visentini
Hi.
I am testing oVirt 4.5.5-0.master.20230712143502.git07e865d650.el8.
I am missing the Python scripts to download and upload disks and images... Will
it still be possible to use them, or should I consider using Ansible?
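In case it is useful: besides the SDK example scripts, recent ovirt-imageio-client
builds ship an ovirt-img CLI that covers upload and download. A rough sketch from
memory; the exact flag names should be verified with "ovirt-img upload-disk --help",
and the URL, password file, CA file, storage domain and disk UUID are placeholders:
  ovirt-img upload-disk \
      --engine-url https://engine.example.com \
      --username admin@internal \
      --password-file /root/engine-pass \
      --cafile ca.pem \
      --storage-domain mydata \
      disk.qcow2
  ovirt-img download-disk \
      --engine-url https://engine.example.com \
      --username admin@internal \
      --password-file /root/engine-pass \
      --cafile ca.pem \
      DISK_UUID backup.qcow2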
BR.
--
Att,
Jorge Visentini
+55 55 98432-9868
1 year, 9 months
oVirt 4.4. Engine Deployment: Problems with Gluster Storage Domain
by Thyen, Niko
Hi everybody!
I am having a hard time getting oVirt 4.4 to work. We want to update our
4.3 cluster, and I am trying to set up a fresh 4.4 cluster (and restore
the backup later on) in order to update to 4.5. It fails at the end of
the engine deployment, when the Gluster storage domain should be added.
I installed oVirt Node 4.4.10 on an old PC and made the following
modifications to the engine deployment process:
- altered defaults in
/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/defaults/main.yml:
- "he_pause_before_engine_setup: true" (in this pause before engine
setup, i ssh into the engine and exclude the package postgresql-jdbc
from update, which otherwise breaks the deployment [1])
- "he_remove_appliance_rpm: false" (to avoid the large download every
single try, since I tried a lot of times)
- "he_force_ip4: true" (to avoid problems with IPv6, see below)
- in
/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/fetch_host_ip.yml
I added, after "- name: Get host address resolution:", the following lines
(to avoid a problem with an "invalid" IPv6 address, which otherwise
breaks the deployment [2]):
- name: Get host IP addresses
ansible.builtin.command: hostname -I
register: hostname_addresses_output
changed_when: true
Most of the time I started the deployment from the shell, but I tried the web interface of
the node as well. It fails at the task "Add glusterfs storage domain"
with the following message:
"[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault
detail is "[Failed to fetch Gluster Volume List]". HTTP response code is
400." (See also [3])
When the setup asks for storage, I tried different answers
(gluster.local:/volume, gluster.local:/path/to/brick/volume,
192.168.8.51:/volume ...), no mount options.
I added firewall rules for glusterfs on the node and the engine. I even tried
disabling the firewall. No firewall is running on the gluster servers. On
the node, I also tested setting SELinux to permissive.
I recorded the traffic on different interfaces ("ovirtmgmt" and "virbr0"
on the node and "eth0" on the engine) and I can see the node and the
gluster server talking: the node gets the volume with its options (which are,
by the way, compliant with the docs: "storage.owner-gid: 36", "storage.owner-uid:
36", etc.), but that's it; no further packets to mount the volume.
I noticed some ARP packets as well: the node asks for the engine's IP
(the configured static IP, which is not yet active), and the engine
sends a DNS request for the gluster server to the node (via interface
virbr0) but does not connect to the gluster server. At least, that's what
I can see; most of the traffic is TLS, which I could not decrypt yet. I
appreciate any hint on where to find the right keys.
Anyway, I can ssh from the engine to the gluster server and mount the
gluster volume manually on the node (mount -t glusterfs
gluster.local:/volume /local/path), so there seem to be no connectivity
issues.
Since the engine deployment log is around 30 MB, I attached a log summary
with the findings I found relevant. I'll provide more logs if needed.
I really want to put an end to this huge time sink. Can anyone help me or
point me in the right direction?
Many thanks in advance :)
Regards,
Niko
[1] This was the error message I got:
"[ ERROR ] fatal: [localhost -> 192.168.222.195]: FAILED! =>
{"attempts": 30, "changed": false, "connection": "close", "content":
"Error500 - Internal Server Error", "content_encoding": "identity",
"content_length": "86", "content_type": "text/html; charset=UTF-8",
"date": "Wed, 17 May 2023 22:42:27 GMT", "elapsed": 0, "msg": "Status
code was 500 and not [200]: HTTP Error 500: Internal Server Error",
"redirected": false, "server": "Apache/2.4.37 (centos) OpenSSL/1.1.1k
mod_auth_gssapi/1.6.1 mod_wsgi/4.6.4 Python/3.6", "status": 500, "url":
"http://localhost/ovirt-engine/services/health"}"
[2] This was the error message I got:
"VDSM ovirt.martinwi.local command HostSetupNetworksVDS failed: Internal
JSON-RPC error: {'reason': "Invalid IP address:
'fe80::ea3f:67ff:fe7f:a029%ovirtmgmt' does not appear to be an IPv4 or
IPv6 address"}"
[3]
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20230718140628-rryscj.log:
2023-07-18 16:32:35,877+0200 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:106 {'msg': 'Fault reason is "Operation
Failed". Fault detail is "[Failed to fetch Gluster Volume List]". HTTP
response code is 400.', 'exception': 'Traceback (most recent call
last):\n File
"/tmp/ansible_ovirt_storage_domain_payload_b4ofbzxa/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py",
line 804, in main\n File
"/tmp/ansible_ovirt_storage_domain_payload_b4ofbzxa/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/module_utils/ovirt.py",
line 674, in create\n **kwargs\n File
"/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 26258,
in add\n return self._internal_add(storage_domain, headers, query,
wait)\n File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py",
line 232, in _internal_add\n return future.wait() if wait else
future\n File
"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in
wait\n return self._code(response)\n File
"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in
callback\n self._check_fault(response)\n File
"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in
_check_fault\n self._raise_error(response, body)\n File
"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in
_raise_error\n raise error\novirtsdk4.Error: Fault reason is
"Operation Failed". Fault detail is "[Failed to fetch Gluster Volume
List]". HTTP response code is 400.\n', 'invocation': {'module_args':
{'state': 'unattached', 'name': 'hosted_storage', 'host':
'ovirt.martinwi.local', 'data_center': 'Default', 'wait': True,
'glusterfs': {'address': 'gluster1.martinwi.local', 'path': '/gv3',
'mount_options': ''}, 'timeout': 180, 'poll_interval': 3,
'fetch_nested': False, 'nested_attributes': [], 'domain_function':
'data', 'id': None, 'description': None, 'comment': None, 'localfs':
None, 'nfs': None, 'iscsi': None, 'managed_block_storage': None,
'posixfs': None, 'fcp': None, 'wipe_after_delete': None, 'backup': None,
'critical_space_action_blocker': None, 'warning_low_space': None,
'destroy': None, 'format': None, 'discard_after_delete': None}},
'_ansible_no_log': False, 'changed': False}
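One hedged check that might narrow down the "Failed to fetch Gluster Volume List"
part: the engine asks vdsm on the host for that list, so try the same query locally
on the node (verb names from memory; the gluster vdsm and CLI bits must be installed):
  rpm -q vdsm-gluster glusterfs-cli
  vdsm-client GlusterVolume list
  # or ask the external gluster server directly from the node
  gluster --remote-host=gluster1.martinwi.local volume info gv3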
1 year, 9 months
engine-setup failing on 4.3.2 -> 4.3.3 fails during Engine schema refresh fail
by Edward Berger
I was trying to upgrade a hyperconverged oVirt hosted engine and it failed in
the engine-setup command with these errors and warnings.
...
[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] schema.sh: FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
refresh failed
...
[ INFO ] Yum Verify: 16/16: ovirt-engine-tools.noarch 0:4.3.3.5-1.el7 - e
[WARNING] Rollback of DWH database postponed to Stage "Clean up"
[ INFO ] Rolling back database schema
...
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
Attaching engine-setup logfile.
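A hedged way to see the real SQL error behind the generic "Cannot execute sql command"
line: the engine-setup log under /var/log/ovirt-engine/setup/ usually shows the failing
constraint, and the same script can be run by hand (on 4.3 psql may live under the
rh-postgresql10 SCL, so adjust the binary path; the config file and variables are the
standard engine ones):
  . /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
  PGPASSWORD="${ENGINE_DB_PASSWORD}" psql -h "${ENGINE_DB_HOST}" -p "${ENGINE_DB_PORT}" \
      -U "${ENGINE_DB_USER}" -d "${ENGINE_DB_DATABASE}" \
      -f /usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql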
1 year, 9 months
Local physical disk directly on the VM
by Jorge Visentini
Hi.
Is it possible to pass a physical disk straight through to a VM?
Here's my idea... I have a local disk on the server and I want to hand
it directly to a VM; is that possible?
For example, take the hypervisor disk
/dev/disk/by-id/ata-INTEL_SSDSC2KB480G8_PHYF117203AA480BGN and hand it to
the VM.
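As far as I know there is no supported way to hand a plain local SATA disk straight to
a VM from the UI; a workaround people sometimes use is to export the disk over iSCSI
from the host and attach it to the VM as a Direct LUN. A rough sketch with targetcli
(the IQNs are made-up placeholders and this is not an official oVirt feature):
  targetcli /backstores/block create name=vmdisk1 \
      dev=/dev/disk/by-id/ata-INTEL_SSDSC2KB480G8_PHYF117203AA480BGN
  targetcli /iscsi create iqn.2023-07.cloud.example:vmdisk1
  targetcli /iscsi/iqn.2023-07.cloud.example:vmdisk1/tpg1/luns create /backstores/block/vmdisk1
  targetcli /iscsi/iqn.2023-07.cloud.example:vmdisk1/tpg1/acls create iqn.1994-05.com.redhat:your-host-initiator
  targetcli saveconfig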
BR.
--
Att,
Jorge Visentini
+55 55 98432-9868
1 year, 9 months
Re: Future of the oVirt CSI Driver
by Mike Rochefort
On 7/11/23 2:58 AM, Sandro Bonazzola wrote:
> I think that if OKD is dropping the oVirt CSI Driver this also will mean
> nobody will be actively developing it anymore.
I asked about this on the Kubernetes Slack workspace and Vadim mentioned
that in order for the OKD project to continue providing oVirt support,
someone with oVirt knowledge and operator development would need to step
up. It would also require forking the installer and a few other
projects, which is something the OKD team spent a lot of effort to not
have to do anymore during the 4.x series.
https://github.com/openshift/installer
Looking at the OpenShift Installer git, so far what's changed is an
admin can no longer specify oVirt as a deployment target. According to
Jira, there will be a second phase of the deprecation where the actual
provisioning pieces (e.g. Terraform) will be removed, but that probably
won't be for 4.14.
https://issues.redhat.com//browse/OCPBUGS-14818
But not having anyone develop the CSI driver means it will likely end up
with bit rot. With OpenShift 4.14 the oVirt platform becomes less
attractive to use, though IPI/UPI installations should be possible. A
different storage backend would probably be recommended, however.
--
Mike Rochefort
1 year, 9 months
oVirt Self-Hosted Deploy looping during search for available subnet
by Lucy Silvestre
Hi,
I have been stuck on this problem for weeks during the self-hosted engine deployment. Does anyone have suggestions to resolve it?
Error:
The deploy loops in this part until the deployment fails:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get ip route]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if can't find an available subnet]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set new IPv4 subnet prefix]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Search again with another prefix]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Define 3rd chunk]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set 3rd chunk]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get ip route]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if can't find an available subnet]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set new IPv4 subnet prefix]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Search again with another prefix]
[ INFO ] ok: [localhost]
Details:
OS: Oracle Linux 8.8 (minimum installation) and updated
oVirt 4.4
ovirt-hosted-engine-setup and ovirt-engine-appliance installed
Network device used in the Host machine: eno1 - 192.168.10.x and I also tried with Bond.
I made a reservation on DHCP for these IPs used during the installation.
I tried from the terminal and from Cockpit, and it is the same issue.
I tried with the command: hosted-engine --deploy --4
I tried with DHCP and Static IP
I will appreciate any help. Thank you.
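A hedged guess at what is happening: the setup looks for a free /24 for the temporary
NATted libvirt network (192.168.222.0/24 by default, if I remember the role defaults
correctly) and keeps looping when every candidate looks taken. Worth checking which
ranges are already routed, and optionally forcing a prefix you know is free; the
variable and option below are from memory, so verify them against your
ovirt-hosted-engine-setup version first:
  ip -4 route
  ip -4 addr
  hosted-engine --deploy --4 --ansible-extra-vars=he_ipv4_subnet_prefix=192.168.144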
1 year, 9 months
ACTION_TYPE_FAILED_DISK_IS_BEING_TRANSFERRED
by eshwayri@gmail.com
System unexpectedly lost power. Now when I try to start one of the VMs I get: "ACTION_TYPE_FAILED_DISK_IS_BEING_TRANSFERRED". This may be due to a failed backup earlier in the week. There are no active tasks against this VM or its disk at this time. The disk shows OK in the storage view. Any ideas?
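A hedged way to confirm whether a stale image transfer from the failed backup is what
blocks the start (the query is read-only; check the unlock tool's --help before using
it for real):
  # on the engine machine: any transfer rows left behind?
  sudo -u postgres psql engine -c 'select disk_id, phase, last_updated from image_transfers;'
  # restarting the engine normally lets it clean up dead transfers on startup
  systemctl restart ovirt-engine
  # if the disk itself stays locked afterwards, the bundled unlock tool can list it
  /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t disk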
1 year, 10 months
Suggestion to switch to nightly
by Sandro Bonazzola
Hi,
As you probably noticed there were no regular releases after oVirt 4.5.4
<https://ovirt.org/release/4.5.4/> in December 2022.
Despite the calls to action to the community and to the companies involved
with oVirt, there has been no uptake of the leadership of the oVirt project
yet.
The developers at Red Hat who are still dedicating time to the project are now
facing the fact that they lack the time to do formal releases, even though they keep
fixing platform regressions, like the recent ones due to the new Ansible
changes. That makes a nightly snapshot setup a more stable environment than
oVirt 4.5.4.
For this reason, we would like to suggest the user community to enable
nightly repositories for oVirt by following the procedure at:
https://www.ovirt.org/develop/dev-process/install-nightly-snapshot.html
This will ensure that the latest fixes for the platform regressions will be
promptly available.
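For convenience, on an EL9-based host that procedure boils down to roughly the
following (from memory, so treat the page linked above as the authoritative source):
  dnf copr enable -y ovirt/ovirt-master-snapshot centos-stream-9
  dnf install -y ovirt-release-master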
Regards,
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING - Red Hat In-Vehicle Operating System
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
1 year, 10 months
GPU Passthrough issues with oVirt 4.5
by Vinícius Ferrão
Hello, is anyone having issues with device passthrough on oVirt 4.5?
I can pass the devices through to a given VM without issue, but inside the VM not all of the devices are recognized.
In my case I've added 4x GPUs to a VM, but only one shows up, and the following errors appear inside the VM:
[ 23.006655] nvidia 0000:0a:00.0: enabling device (0000 -> 0002)
[ 23.008026] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR1 is 0M @ 0x0 (PCI:0000:0a:00.0)
[ 23.008035] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR2 is 0M @ 0x0 (PCI:0000:0a:00.0)
[ 23.008040] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR3 is 0M @ 0x0 (PCI:0000:0a:00.0)
[ 23.008045] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR4 is 0M @ 0x0 (PCI:0000:0a:00.0)
[ 23.008049] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR5 is 0M @ 0x0 (PCI:0000:0a:00.0)
[ 23.012339] NVRM: The NVIDIA GPU 0000:0a:00.0 (PCI ID: 10de:1db1)
NVRM: installed in this system is not supported by the
NVRM: NVIDIA 535.54.03 driver release.
NVRM: Please see 'Appendix A - Supported NVIDIA GPU Products'
NVRM: in this release's README, available on the operating system
NVRM: specific graphics driver download page at www.nvidia.com.
[ 23.016175] nvidia: probe of 0000:0a:00.0 failed with error -1
[ 23.016838] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR0 is 0M @ 0x0 (PCI:0000:0b:00.0)
[ 23.016842] nvidia: probe of 0000:0b:00.0 failed with error -1
[ 23.017211] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR0 is 0M @ 0x0 (PCI:0000:0c:00.0)
[ 23.017215] nvidia: probe of 0000:0c:00.0 failed with error -1
[ 23.017248] NVRM: The NVIDIA probe routine failed for 3 device(s).
[ 23.214409] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 535.54.03 Tue Jun 6 22:20:39 UTC 2023
[ 23.485704] [drm] [nvidia-drm] [GPU ID 0x00000900] Loading driver
[ 23.485708] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:09:00.0 on minor 1
On the host this shows up in dmesg, but it seems right:
[ 709.572845] vfio-pci 0000:1a:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
[ 709.572877] vfio-pci 0000:1a:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
[ 709.572883] vfio-pci 0000:1a:00.0: vfio_ecap_init: hiding ecap 0x23@0xac0
[ 710.660813] vfio-pci 0000:1d:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
[ 710.660845] vfio-pci 0000:1d:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
[ 710.660851] vfio-pci 0000:1d:00.0: vfio_ecap_init: hiding ecap 0x23@0xac0
[ 711.748760] vfio-pci 0000:1e:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
[ 711.748791] vfio-pci 0000:1e:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
[ 711.748797] vfio-pci 0000:1e:00.0: vfio_ecap_init: hiding ecap 0x23@0xac0
[ 712.836687] vfio-pci 0000:1c:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
[ 712.836718] vfio-pci 0000:1c:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
[ 712.836725] vfio-pci 0000:1c:00.0: vfio_ecap_init: hiding ecap 0x23@0xac0
Thanks.
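Not a definitive answer, but since three of the four GPUs come up with zero-sized BARs
it is worth confirming on the host that every function is really bound to vfio-pci, and
in the guest whether the BARs were assigned any address space at all (addresses taken
from the logs above):
  # on the host
  for d in 0000:1a:00.0 0000:1c:00.0 0000:1d:00.0 0000:1e:00.0; do
      printf '%s -> %s\n' "$d" "$(basename "$(readlink /sys/bus/pci/devices/$d/driver)")"
  done
  # in the guest: the Region lines show whether each BAR got address space
  lspci -vvnn -d 10de: | grep -E '^[0-9a-f]{2}:|Region'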
1 year, 10 months
No bootable disk OVA
by cello86@gmail.com
Hi all,
We imported a RHEL 9.2 image from an OVA generated on VMware and tried to create a new VM, but we got a "no bootable disk" error. The OVA was imported with virt-v2v, and when we create a new VM we notice that the disk is 2 GB in size, but we resized the disk to 50 GB.
The VM was started with the Q35 chipset and UEFI options, and the disk has the bootable flag enabled.
We're using ovirt 4.5.4-1
Could you help us sort out this issue?
Thanks,
Marcello
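One hedged way to tell whether the imported disk really contains a bootable RHEL
install or only a 2 GB stub is to inspect it with qemu-img and the libguestfs tools
(the disk path is a placeholder; on block storage you would point these at the LV):
  qemu-img info /path/to/imported-disk
  virt-filesystems -a /path/to/imported-disk --long --parts --filesystems
  virt-inspector -a /path/to/imported-disk | head -40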
1 year, 10 months
Restoring HE Fails, engine-config cannot connect to database
by Levi Wilbert
I am attempting to restore an HE backup to a fresh host (not previously in the old environment) in order to restore our old environment, but I am running into issues during the deployment.
Basically my goal is to remove and redeploy an existing HE back into its same environment on a new storage domain.
What I've done:
backed up HE from prior environment
Installed oVirt 4.5.10 on a fresh node that was not in the prior environment
Ran the redeployment: hosted-engine --deploy --restore-from-file=<bkpfile> --4
The script pauses the deployment (even though I told it not to); during this part I update /etc/dnf/dnf.conf with "exclude=ansible-core", since once ansible-core is updated it breaks the deployment script with Python incompatibilities.
But I'm running into the following:
[ ERROR ] fatal: [localhost -> 192.168.222.158]: FAILED! => {"changed": true, "cmd": "set -euo pipefail && engine-config -g DisableFenceAtStartupInSec | cut -d' ' -f2 > /root/DisableFenceAtStartupInSec.txt", "delta": "0:00:01.296169", "end": "2023-07-05 11:29:14.101292", "msg": "non-zero return code", "rc": 1, "start": "2023-07-05 11:29:12.805123", "stderr": "Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false", "stderr_lines": ["Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false"], "stdout": "", "stdout_lines": []}
I see that it fails running the engine-config command on the new hosted engine, but when I SSH to it and try running it, I get:
# engine-config -l
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Connection to the Database failed. Please check that the hostname and port number are correct and that the Database service is up and running.
I haven't been able to find anything specifically for this area searching through Google. Anyone have any idea where to go with this?
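A few hedged checks from inside the restored engine VM, to see whether the database is
actually up and what engine-config is pointed at (standard paths on the engine appliance):
  systemctl status postgresql
  cat /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
  sudo -u postgres psql -c '\l' | grep -i engine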
1 year, 10 months
ovirt 4.5.4 deploy self-hosted engine
by Jorge Visentini
Hi.
I'm trying to deploy the engine, but I'm hitting some errors that I couldn't
identify.
I don't know if it's an incompatibility with my hardware or some libvirt bug.
Jul 05 10:06:21 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[690916]: 690917 still running (48505)
Jul 05 10:06:21 ksmmi1r02ovirt36.kosmo.cloud libvirtd[701878]: Domain id=1
name='HostedEngineLocal' uuid=922a156c-7f4c-4815-a645-54ed07 794451 is
tainted: custom-ga-command
Jul 05 10:06:21 ksmmi1r02ovirt36.kosmo.cloud virtlogd[630980]: Client hit
max requests limit 1. This may result in keep-alive timeouts. Consider
tuning the max_client_requests server parameter
Jul 05 10:06:22 ksmmi1r02ovirt36.kosmo.cloud libvirtd[701878]: Invalid
value '-1' for 'cpu.max': Invalid argument
Jul 05 10:06:26 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[690916]: 690917 still running (48500)
Jul 05 10:06:31 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[690916]: 690917 still running (48495)
Jul 05 10:06:31 ksmmi1r02ovirt36.kosmo.cloud systemd[1]:
systemd-timedated.service: Deactivated successfully.
Jul 05 10:06:36 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[690916]: 690917 still running (48490)
Jul 05 10:06:37 ksmmi1r02ovirt36.kosmo.cloud libvirtd[701878]: Invalid
value '-1' for 'cpu.max': Invalid argument
Jul 05 10:06:41 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[690916]: 690917 still running (48485)
Jul 05 10:06:46 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[690916]: 690917 still running (48480)
Jul 05 10:06:51 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[690916]: 690917 still running (48475)
Jul 05 10:06:52 ksmmi1r02ovirt36.kosmo.cloud libvirtd[701878]: Invalid
value '-1' for 'cpu.max': Invalid argument
Jul 05 10:06:56 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[690916]: 690917 still running (48470)
Jul 05 10:07:01 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[690916]: 690917 still running (48465)
*My config:*
*CPU:* 2 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz
*Memory:* 4TB
*Disk:* 120GB RAID 1
*ISO:* ovirt-node-ng-installer-4.5.4-2022120615.el9.iso
*Packages:*
kernel-5.14.0-202.el9.x86_64
libvirt-8.9.0-2.el9.x86_64
centos-release-ovirt45-9.1-3.el9s.noarch
python3-ovirt-engine-sdk4-4.6.0-1.el9.x86_64
ovirt-imageio-common-2.4.7-1.el9.x86_64
ovirt-imageio-client-2.4.7-1.el9.x86_64
ovirt-openvswitch-ovn-2.15-4.el9.noarch
ovirt-openvswitch-ovn-common-2.15-4.el9.noarch
ovirt-imageio-daemon-2.4.7-1.el9.x86_64
ovirt-openvswitch-ovn-host-2.15-4.el9.noarch
python3-ovirt-setup-lib-1.3.3-1.el9.noarch
ovirt-vmconsole-1.0.9-1.el9.noarch
ovirt-vmconsole-host-1.0.9-1.el9.noarch
ovirt-openvswitch-2.15-4.el9.noarch
python3-ovirt-node-ng-nodectl-4.4.2-1.el9.noarch
ovirt-node-ng-nodectl-4.4.2-1.el9.noarch
ovirt-ansible-collection-3.0.0-1.el9.noarch
ovirt-python-openvswitch-2.15-4.el9.noarch
ovirt-openvswitch-ipsec-2.15-4.el9.noarch
ovirt-hosted-engine-ha-2.5.0-1.el9.noarch
ovirt-provider-ovn-driver-1.2.36-1.el9.noarch
ovirt-host-dependencies-4.5.0-3.el9.x86_64
ovirt-hosted-engine-setup-2.7.0-1.el9.noarch
ovirt-host-4.5.0-3.el9.x86_64
ovirt-release-host-node-4.5.4-1.el9.x86_64
ovirt-node-ng-image-update-placeholder-4.5.4-1.el9.noarch
ovirt-engine-appliance-4.5-20221206125848.1.el9.x86_64
For better understanding, the deployment log is attached.
I appreciate any tips that might help.
Thank you!
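While the "still running" loop is going it can help to watch the temporary
HostedEngineLocal VM directly on the host; read-only virsh works without libvirt
credentials and the log paths below are the defaults:
  virsh -r list --all
  virsh -r dominfo HostedEngineLocal
  tail -f /var/log/libvirt/qemu/HostedEngineLocal.log
  tail -f /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log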
--
Att,
Jorge Visentini
+55 55 98432-9868
1 year, 10 months
ovirt template import using ansible
by destfinal@googlemail.com
Hi,
I use a set of templates, generated in one cluster and re-used in multiple clusters. The clusters do not have direct connections between each other. My dev environment can talk to all the clusters. Currently,
1. I export the templates (as OVAs) to one of the nodes (example: node1.source.cluster), from the ovirt console (https://management.source.cluster)
2. scp the templates to my dev machine (example: scp -r node1.source.cluster:/tmp/ovirt_templates /tmp/ovirt_templates)
3. scp the templates from my dev environment to the target cluster (example: scp /tmp/ovirt_templates node1.target.cluster:/tmp)
4. Import the templates using the ovirt console of the target cluster (https://management.target.cluster)
This is a highly manual job and I am trying to automate the process using Ansible. I am unable to work it out from the ovirt_template module documentation (https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_tem...) and could not see any other module related to this.
Has anybody done this before who could point me in the right direction? Or if there is a better process than the one I follow above, please suggest it.
Please let me know if you need more information in this regard.
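A small, hedged pointer: the installed collection is fully documented through
ansible-doc, which is a quick way to check whether ovirt_template's exported/imported
states (which, as far as I recall, work through an export storage domain rather than
OVA files) or any other module fits this flow better:
  ansible-doc -l ovirt.ovirt
  ansible-doc ovirt.ovirt.ovirt_template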
Thanks
1 year, 10 months