Error when deploying oVirt 4.4 Hosted Engine
by staybox@gmail.com
Hello, I get an error and need help.
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 'not ipv6_deployment|bool and route_rules_ipv4.stdout | from_json | selectattr('priority', 'equalto', 100) | selectattr('dst', 'equalto', virbr_cidr_ipv4 | ipaddr('address') ) | list | length == 0' failed. The error was: error while evaluating conditional (not ipv6_deployment|bool and route_rules_ipv4.stdout | from_json | selectattr('priority', 'equalto', 100) | selectattr('dst', 'equalto', virbr_cidr_ipv4 | ipaddr('address') ) | list | length == 0): 'dict object' has no attribute 'dst'\n\nThe error appears to be in '/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/01_prepare_routing_rules.yml': line 81, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n changed_when: true\n - name: Add IPv4 inbound route rules\n ^ here\n"}
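A quick hedged check of what that conditional is choking on: the play parses a JSON list of IPv4 routing rules, and any rule without a "dst" field (for example the default "from all lookup main" rules) trips the selectattr('dst', ...) filter, which matches the "'dict object' has no attribute 'dst'" message. Assuming the JSON comes from iproute2, the rules can be inspected with:
# dump the rules as JSON and check which entries lack "dst"
ip -j rule list | python3 -m json.tool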
4 months, 3 weeks
How to list all snapshots?
by jorgevisentini@gmail.com
Hello everyone!
First, I would like to thank everyone involved in this wonderful project. I leave here my sincere thanks!
Does anyone know if it is possible to list all snapshots automatically? It can be by Ansible, Python, shell... any way that lists them all without having to go through them domain by domain.
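For reference, a minimal hedged sketch of one way to do this over the REST API (the engine URL and credentials are placeholders, and it assumes your API version supports the "follow" parameter):
# list every VM together with its snapshots in a single call
curl -sk -u 'admin@internal:PASSWORD' \
  'https://engine.example.com/ovirt-engine/api/vms?follow=snapshots'
# if "follow" is not available, the same data is under .../api/vms/<vm-id>/snapshots per VM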
Thank you all!
4 months, 3 weeks
snapshot solution: Existing snapshots that were taken after this one will be erased.
by dhanaraj.ramesh@yahoo.com
Hi Team,
When I want to commit an older snapshot I get a warning stating "Existing snapshots that were taken after this one will be erased." Is there any way to retain the latest snapshots as-is in the chain?
I know cloning and template export are options to secure the latest snapshot data, but these will consume additional storage space and take time.
5 months, 1 week
Restart oVirt-Engine
by Jeremey Wise
How do I restart the oVirt engine without rebooting the hosting system?
# I tried the below, but it does not seem to affect the virtual machine
[root@thor iso]# systemctl restart ov
ovirt-ha-agent.service ovirt-imageio.service
ovn-controller.service ovs-delete-transient-ports.service
ovirt-ha-broker.service ovirt-vmconsole-host-sshd.service
ovsdb-server.service ovs-vswitchd.service
[root@thor iso]#
# You cannot restart the VM "HostedEngine" as it responds:
Error while executing action:
HostedEngine:
- Cannot restart VM. This VM is not managed by the engine.
The reason is I had to do some work on a node. I rebooted it... it is back up,
the network is all fine, Cockpit is working fine, and Gluster is fine. But
oVirt-Engine refuses to accept that the node is up.
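As a hedged sketch: ovirt-engine itself is a service inside the HostedEngine VM, not on the host, so it has to be restarted from within that VM (the FQDN below is a placeholder):
hosted-engine --vm-status
ssh root@engine.example.com 'systemctl restart ovirt-engine'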
--
penguinpages <jeremey.wise(a)gmail.com>
5 months, 4 weeks
Unable to access ovirt Admin Screen from ovirt Host
by louisb@ameritech.net
I've reinstalled oVirt 4.4 on my server remotely via the Cockpit terminal. I'm able to access the oVirt admin screen remotely from the laptop that I used for the install. However, using the same URL from the server itself, I'm unable to gain access to the admin screen.
Following the instructions in the documentation I've modified the file /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf to reflect the DNS name, and I entered the IP address. But I'm still unable to access the screen from the server console.
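For reference, a minimal hedged sketch of what that file typically contains (the FQDN and IP below are placeholders; the engine service needs a restart afterwards):
# /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf
SSO_ALTERNATE_ENGINE_FQDNS="engine.example.com 192.0.2.10"
systemctl restart ovirt-engine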
What else needs to change in order to gain access from the server console?
Thanks
7 months
SPM and Task error ...
by Enrico
Hi all,
my oVirt cluster has 3 hypervisors running CentOS 7.5.1804, vdsm is 4.20.39.1-1.el7,
the oVirt engine is 4.2.4.5-1.el7, and the storage systems are HP MSA P2000 and
2050 (Fibre Channel).
I need to stop one of the hypervisors for maintenance, but this system is
the storage pool manager.
For this reason I decided to manually activate SPM on one of the other
nodes, but this operation was not successful.
In the ovirt engine (engine.log) the error is this:
2019-07-25 12:39:16,744+02 INFO
[org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
ForceSelectSPMCommand internal: false. Entities affected : ID:
81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group
MANIPULATE_HOST with role type ADMIN
2019-07-25 12:39:16,745+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopOnIrsVDSCommand(
SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false'}), log id: 37bf4639
2019-07-25 12:39:16,747+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
ResetIrsVDSCommand(
ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
ignoreStopFailed='false'}), log id: 2522686f
2019-07-25 12:39:16,749+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopVDSCommand(HostName = infn-vm05.management,
SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
2019-07-25 12:39:16,758+02 *ERROR*
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] SpmStopVDSCommand::Not
stopping SPM on vds 'infn-vm05.management', pool id
'18d57688-6ed4-43b8-bd7c-0665b55950b7' as there are uncleared tasks
'Task 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e', status 'running''
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopVDSCommand, log id: 1810fd8b
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
ResetIrsVDSCommand, log id: 2522686f
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopOnIrsVDSCommand, log id: 37bf4639
2019-07-25 12:39:16,760+02 *ERROR*
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] EVENT_ID:
USER_FORCE_SELECTED_SPM_STOP_FAILED(4,096), Failed to force select
infn-vm07.management as the SPM due to a failure to stop the current SPM.
while in the hypervisor (SPM) vdsm.log:
2019-07-25 12:39:16,744+02 INFO
[org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
ForceSelectSPMCommand internal: false. Entities affected : ID:
81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group
MANIPULATE_HOST with role type ADMIN
2019-07-25 12:39:16,745+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopOnIrsVDSCommand(
SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false'}), log id: 37bf4639
2019-07-25 12:39:16,747+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
ResetIrsVDSCommand(
ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
ignoreStopFailed='false'}), log id: 2522686f
2019-07-25 12:39:16,749+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopVDSCommand(HostName = infn-vm05.management,
SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
2019-07-25 12:39:16,758+02 *ERROR*
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] SpmStopVDSCommand::Not
stopping SPM on vds 'infn-vm05.management', pool id
'18d57688-6ed4-43b8-bd7c-0665b55950b7' as there are uncleared tasks
'Task 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e', status 'running''
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopVDSCommand, log id: 1810fd8b
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
ResetIrsVDSCommand, log id: 2522686f
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopOnIrsVDSCommand, log id: 37bf4639
2019-07-25 12:39:16,760+02 *ERROR*
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] EVENT_ID:
USER_FORCE_SELECTED_SPM_STOP_FAILED(4,096), Failed to force select
infn-vm07.management as the SPM due to a failure to stop the current SPM.
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 *ERROR*
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::logEndTaskFailure: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
with failure:
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,751+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 34ae2b2f
2019-07-25 12:39:18,752+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 34ae2b2f
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::onTaskEndSuccess: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
successfully.
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 42de0c2b
2019-07-25 12:39:18,759+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 42de0c2b
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Cleaning zombie
tasks: Clearing async task 'Unknown' that started at 'Fri May 03
14:48:50 CEST 2019'
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,765+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: da77af2
2019-07-25 12:39:18,766+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: da77af2
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
There seems to be some relation between this error and a task that has remained
hanging; from the SPM server:
# vdsm-client Task getInfo taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
"verb": "prepareMerge",
"id": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e"
}
# vdsm-client Task getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
"message": "running job 1 of 1",
"code": 0,
"taskID": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e",
"taskResult": "",
"taskState": "running"
}
How can I solve this problem?
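For reference, a hedged sketch of how such a hanging task can be aborted from the SPM host once it is safe to give up on the prepareMerge (the task ID is the one from the output above; check that your vdsm version exposes the stop verb before relying on it):
vdsm-client Task stop taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
vdsm-client Task clear taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e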
Thanks a lot for your help !!
Best Regards
Enrico
--
_______________________________________________________________________
Enrico Becchetti Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY)
Phone:+39 075 5852777 Mail: Enrico.Becchetti<at>pg.infn.it
_______________________________________________________________________
7 months, 2 weeks
oVirt networks
by Enrico Becchetti
Dear all,
I need your help to understand how to configure the network of a new
oVirt cluster.
My new system will have a 4.3 engine that runs in a virtual machine, and some
Dell R7525 AMD EPYC hypervisors, each holding two 4-port PCI network cards.
These servers will run the oVirt Node image, again in version 4.3.
As for the network, there are two HPE Aruba 2540G switches, non-stackable, with
24 1Gb/s ports and 2 10Gb/s uplinks to the core (star) switch.
This is a simplified scheme:
My goal is to make the most of each server's 8 Ethernet interfaces to have
both reliability and the maximum possible throughput.
This cluster will have two virtual networks, one for oVirt management and
one for the traffic of the individual virtual machines.
With that said, here is my idea: I would like to have two aggregated 4Gb/s links,
one for ovirtmgmt and the other for vmnet.
With the oVirt web interface I can create an active-passive "Mode 1"
bond, but this won't allow me to go beyond 1Gb/s. Alternatively I could
create a "Mode 4" (802.3ad) bond, but unfortunately the switches are not
stacked, so not even this solution applies.
This is an example with active passive configuration:
Can you tell me if oVirt can create nested bonds? Or do you have
other solutions?
Thanks a lot !
Best Regards
Enrico
--
_______________________________________________________________________
Enrico Becchetti Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY)
Phone:+39 075 5852777 Skype:enrico_becchetti
Mail: Enrico.Becchetti<at>pg.infn.it
Pagina web personale: https://www.pg.infn.it/home/enrico-becchetti/
______________________________________________________________________
10 months, 2 weeks
boot from cdrom & error code 0005
by edp@maddalena.it
Hi.
I have created a new storage domain (data domain, storage type NFS) to use for uploading ISO images.
I then uploaded a new ISO and attached it to a new VM.
But when I try to boot the VM I get this error:
booting from dvd/cd...
boot failed: could not read from cdrom (code 0005)
no bootable device
The ISO file was uploaded successfully to the data storage domain, so the VM lets me attach the ISO in the boot settings.
Can you help me?
Thank you
10 months, 3 weeks
VM Migration Failed
by KSNull Zero
Running oVirt 4.4.5
VM cannot migrate between hosts.
vdsm.log contains the following error:
libvirt.libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tls://ovhost01.local/system: authentication failed: Failed to verify peer's certificate
Certificates on the hosts were renewed some time ago. How can this issue be fixed?
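For reference, a hedged first check on both the source and destination hosts: confirm that the libvirt client certificate is still valid and is signed by the CA the peer trusts (paths are the standard vdsm/libvirt ones):
openssl x509 -in /etc/pki/libvirt/clientcert.pem -noout -dates
openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/libvirt/clientcert.pem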
Thank you.
12 months
How to re-enroll (or renew) host certificates for a single-host hosted-engine deployment?
by Derek Atkins
Hi,
I've got a single-host hosted-engine deployment that I originally
installed with 4.0 and have upgraded over the years to 4.3.10. I and some
of my users have upgraded remote-viewer and now I get an error when I try
to view the console of my VMs:
(remote-viewer:8252): Spice-WARNING **: 11:30:41.806:
../subprojects/spice-common/common/ssl_verify.c:477:openssl_verify: Error
in server certificate verification: CA signature digest algorithm too weak
(num=68:depth0:/O=<My Org Name>/CN=<Host's Name>)
I am 99.99% sure this is because the old certs use SHA1.
I reran engine-setup on the engine and it asked me if I wanted to renew
the PKI, and I answered yes. This replaced many[1] of the certificates in
/etc/pki/ovirt-engine/certs on the engine, but it did not update the
Host's certificate.
All the documentation I've seen says that to refresh this certificate I
need to put the host into maintenance mode and then re-enroll. However, I
cannot do that, because this is a single-host system so I cannot put the
host in local mode -- there is no place to migrate the VMs (let alone the
Engine VM).
So.... Is there a command-line way to re-enroll manually and update the
host certs? Or some other way to get all the leftover certs renewed?
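For reference, a hedged way to confirm which host-side certificates are still SHA1-signed (the paths are the standard vdsm/SPICE ones):
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -text | grep 'Signature Algorithm'
openssl x509 -in /etc/pki/vdsm/libvirt-spice/server-cert.pem -noout -text | grep 'Signature Algorithm'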
Thanks,
-derek
[1] Not only did it not update the Host's cert, it did not update any of
the vmconsole-proxy certs, nor the certs in /etc/pki/ovirt-vmconsole/, and
obviously nothing in /etc/pki/ on the host itself.
--
Derek Atkins 617-623-3745
derek(a)ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant
1 year
4.5.4 with Ceph only storage
by Maurice Burrows
Hey ... A long story short ... I have an existing Red Hat Virt / Gluster hyperconverged solution that I am moving away from.
I have an existing Ceph cluster that I primarily use for OpenStack and a small requirement for S3 via RGW.
I'm planning to build a new oVirt 4.5.4 cluster on RHEL 9 using Ceph for all storage requirements. I've read many online articles on oVirt and Ceph, and they all seem to use the Ceph iSCSI gateway, which is now in maintenance, so I'm not really keen to commit to iSCSI.
So my question is: is there any reason I cannot use CephFS both for the hosted engine and as a data storage domain?
I'm currently running Ceph Pacific FWIW.
Cheers
1 year
Changing disk QoS causes segfault with IO-Threads enabled (oVirt 4.3.0.4-1.el7)
by jloh@squiz.net
We recently upgraded to 4.3.0 and have found that changing disk QoS settings on VMs whilst IO-Threads is enabled causes them to segfault and the VM to reboot. We've been able to replicate this across several VMs. VMs with IO-Threads disabled/turned off do not segfault when changing the QoS.
Mar 1 11:49:06 srvXX kernel: IO iothread1[30468]: segfault at fffffffffffffff8 ip 0000557649f2bd24 sp 00007f80de832f60 error 5 in qemu-kvm[5576498dd000+a03000]
Mar 1 11:49:06 srvXX abrt-hook-ccpp: invalid number 'iothread1'
Mar 1 11:49:11 srvXX libvirtd: 2019-03-01 00:49:11.116+0000: 13365: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Happy to supply some more logs to someone if they'll help but just wondering whether anyone else has experienced this or knows of a current fix other than turning io-threads off.
Cheers.
1 year, 2 months
Deploy oVirt Engine fail behind proxy
by Matteo Bonardi
Hi,
I am trying to deploy the oVirt engine following the self-hosted engine installation procedure in the documentation.
The deployment servers are behind a proxy, and I have set it in the environment and in yum.conf before running the deploy.
The deploy fails because the oVirt engine VM cannot resolve the AppStream repository URL:
[ INFO ] TASK [ovirt.engine-setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> ovirt-manager.mydomain]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'AppStream': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=... [Could not resolve host: mirrorlist.centos.org]", "rc": 1, "results": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ INFO ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Clean local storage pools]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Destroy local storage-pool {{ he_local_vm_dir | basename }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Undefine local storage-pool {{ he_local_vm_dir | basename }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20201109165237.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20201109164244-b3e8sd.log
How can I set the proxy for the engine VM?
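For what it's worth, a hedged workaround sketch: once the engine VM is reachable (for instance over the temporary network during deployment), dnf inside it honours a proxy set in /etc/dnf/dnf.conf (the proxy URL is a placeholder):
# the appended line must end up in the [main] section of dnf.conf
echo 'proxy=http://proxy.mydomain:3128' >> /etc/dnf/dnf.conf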
Ovirt version:
[root@myhost ~]# rpm -qa | grep ovirt-engine-appliance
ovirt-engine-appliance-4.4-20200916125954.1.el8.x86_64
[root@myhost ~]# rpm -qa | grep ovirt-hosted-engine-setup
ovirt-hosted-engine-setup-2.4.6-1.el8.noarch
OS version:
[root@myhost ~]# cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core)
[root@myhost ~]# uname -a
Linux myhost.mydomain 4.18.0-193.28.1.el8_2.x86_64 #1 SMP Thu Oct 22 00:20:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Thanks for the help.
Regards,
Matteo
1 year, 3 months
The oVirt Counter
by Sandro Bonazzola
Hi, for those who remember the Linux Counter project, if you'd like others
to know you're using oVirt and some details about your deployment,
here's a way to count you in:
https://ovirt.org/community/ovirt-counter.html
Enjoy!
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D PERFORMANCE & SCALE
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
1 year, 4 months
Cannot restart ovirt after massive failure.
by Gilboa Davara
Hello all,
During the night, one of my (smaller) setups, a single-node self-hosted
engine (localhost NFS), crashed due to what looks like a massive disk
failure (software RAID6, with 10 drives + spare).
After a reboot, I let the RAID resync with a fresh drive and went on to
start oVirt.
However, no such luck.
Two issues:
1. ovirt-ha-broker fails due to broken hosted engine state (log attached).
2. ovirt-ha-agent fails due to network test (tcp) even though both
remote-host and DNS servers are active. (log attached).
Two questions:
1. Can I somehow force the agent to disable the network liveness test?
2. Can I somehow force the broker to rebuild / fix the hosted engine state?
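For what it's worth, a hedged sketch of the usual way to rebuild the shared hosted-engine state (run on the host with the HA services stopped; double-check against hosted-engine --help for your version):
systemctl stop ovirt-ha-agent ovirt-ha-broker
hosted-engine --reinitialize-lockspace
systemctl start ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status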
- Gilboa
1 year, 5 months
Please, Please Help - New oVirt Install/Deployment Failing - "Host is not up..."
by Matthew J Black
Hi Everyone,
Could someone please help me - I've been trying to do an install of oVirt for *weeks* (including false starts and self-inflicted wounds/errors) and it is still not working.
My setup:
- oVirt v4.5.3
- A brand new fresh vanilla install of RockyLinux 8.6 - all working AOK
- 2*NICs in a bond (802.3ad) with a couple of sub-Interfaces/VLANs - all working AOK
- All relevant IPv4 Address in DNS with Reverse Lookups - all working AOK
- All relevant IPv4 Address in "/etc/hosts" file - all working AOK
- IPv6 (using "method=auto" in the interface config file) enabled on the relevant sub-Interface/VLAN - I'm not using IPv6 on the network, only IPv4, but I'm trying to cover all the bases.
- All relevant Ports (as per the oVirt documentation) set up on the firewall
- ie firewall-cmd --add-service={{ libvirt-tls | ovirt-imageio | ovirt-vmconsole | vdsm }}
- All the relevant Repositories installed (ie RockyLinux BaseOS, AppStream, & PowerTools, and the EPEL, plus the ones from the oVirt documentation)
I have followed the oVirt documentation (including the special RHEL-instructions and RockyLinux-instructions) to the letter - no deviations, no special settings, exactly as they are written.
All the dnf installs, etc, went off without a hitch, including the "dnf install centos-release-ovirt45", "dnf install ovirt-engine-appliance", and "dnf install ovirt-hosted-engine-setup" - no errors anywhere.
Here are the results of a "dnf repolist":
- appstream Rocky Linux 8 - AppStream
- baseos Rocky Linux 8 - BaseOS
- centos-ceph-pacific CentOS-8-stream - Ceph Pacific
- centos-gluster10 CentOS-8-stream - Gluster 10
- centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
- centos-opstools CentOS-OpsTools - collectd
- centos-ovirt45 CentOS Stream 8 - oVirt 4.5
- cs8-extras CentOS Stream 8 - Extras
- cs8-extras-common CentOS Stream 8 - Extras common packages
- epel Extra Packages for Enterprise Linux 8 - x86_64
- epel-modular Extra Packages for Enterprise Linux Modular 8 - x86_64
- ovirt-45-centos-stream-openstack-yoga CentOS Stream 8 - oVirt 4.5 - OpenStack Yoga Repository
- ovirt-45-upstream oVirt upstream for CentOS Stream 8 - oVirt 4.5
- powertools Rocky Linux 8 - PowerTools
So I kicked off the oVirt deployment with: "hosted-engine --deploy --4 --ansible-extra-vars=he_offline_deployment=true".
I used "--ansible-extra-vars=he_offline_deployment=true" because without that flag I was getting "DNF timeout" issues (see my previous post `Local (Deployment) VM Can't Reach "centos-ceph-pacific" Repo`).
I answered the defaults to all of the questions the script asked, or entered the deployment-relevant answers where appropriate. In doing this I double-checked every answer before hitting <Enter>. Everything progressed smoothly until the deployment reached the "Wait for the host to be up" task... which then hung for more than 30 minutes before failing.
From the ovirt-hosted-engine-setup... log file:
- 2022-10-20 17:54:26,285+1100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not up, please check logs, perhaps also on the engine machine"}
I checked the following log files and found all of the relevant ERROR lines, then checked several tens of preceding and succeeding lines trying to determine what was going wrong, but I could not determine anything.
- ovirt-hosted-engine-setup...
- ovirt-hosted-engine-setup-ansible-bootstrap_local_vm...
- ovirt-hosted-engine-setup-ansible-final_clean... - not really relevant, I believe
I can include the log files (or the relevant parts of the log files) if people want - but they are very large: several hundred kilobytes each.
I also googled "oVirt Host is not up" and found several entries, but after reading them all the most relevant seems to be a thread from this mailing list: `Install of RHV 4.4 failing - "Host is not up, please check logs, perhaps also on the engine machine"` - but this seems to be talking about an upgrade and I didn't glean anything useful from it - I could, of course, be wrong about that.
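For reference, a hedged pointer on where else to look: at the "Wait for the host to be up" stage the engine is already running inside the local bootstrap VM, so its host-deploy and engine logs usually explain why the host never became operational (the VM name is the usual default; the IP is printed in the setup log):
virsh -r list      # the bootstrap VM is typically named HostedEngineLocal
ssh root@<local VM IP from the setup log>
less /var/log/ovirt-engine/host-deploy/*.log
less /var/log/ovirt-engine/engine.log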
So my questions are:
- Where else should I be looking (ie other log files, etc, and possibly where to find them)?
- Does anyone have any idea why this isn't working?
- Does anyone have a work-around (including a completely manual process to get things working - I don't mind working in the CLI with virsh, etc)?
- What am I doing wrong?
Please, I'm really stumped with this, and I really do need help.
Cheers
Dulux-Oz
1 year, 5 months
How to renew expired oVirt node vdsm cert manually?
by dhanaraj.ramesh@yahoo.com
Below are the steps to renew the expired vdsm cert of an oVirt node.
# To check whether the cert has expired
# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates
1. Backup vdsm folder
# cd /etc/pki
# mv vdsm vdsm.orig
# mkdir vdsm ; chown vdsm:kvm vdsm
# cd vdsm
# mkdir libvirt-vnc certs keys libvirt-spice libvirt-migrate
# chown vdsm:kvm libvirt-vnc certs keys libvirt-spice libvirt-migrate
2. Regenerate cert & keys
# vdsm-tool configure --module certificates
3. Copy the certs to their destination locations
chmod 440 /etc/pki/vdsm/keys/vdsmkey.pem
chown root /etc/pki/vdsm/certs/*pem
chmod 644 /etc/pki/vdsm/certs/*pem
cp /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/libvirt-spice/ca-cert.pem
cp /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/vdsm/libvirt-spice/server-key.pem
cp /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/vdsm/libvirt-spice/server-cert.pem
cp /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
cp /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/vdsm/libvirt-vnc/server-key.pem
cp /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/vdsm/libvirt-vnc/server-cert.pem
cp -p /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
cp -p /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/vdsm/libvirt-migrate/server-key.pem
cp -p /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/vdsm/libvirt-migrate/server-cert.pem
chown root:qemu /etc/pki/vdsm/libvirt-migrate/server-key.pem
cp -p /etc/pki/vdsm.orig/keys/libvirt_password /etc/pki/vdsm/keys/
mv /etc/pki/libvirt/clientcert.pem /etc/pki/libvirt/clientcert.pem.orig
mv /etc/pki/libvirt/private/clientkey.pem /etc/pki/libvirt/private/clientkey.pem.orig
mv /etc/pki/CA/cacert.pem /etc/pki/CA/cacert.pem.orig
cp -p /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/libvirt/clientcert.pem
cp -p /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/libvirt/private/clientkey.pem
cp -p /etc/pki/vdsm/certs/cacert.pem /etc/pki/CA/cacert.pem
4. Cross-check the backup folder /etc/pki/vdsm.orig vs /etc/pki/vdsm
# refer to /etc/pki/vdsm.orig/*/ and set the correct owner & group permission in /etc/pki/vdsm/*/
5. Restart services # Make sure both services are up
systemctl restart vdsmd libvirtd
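A hedged verification after the steps above: confirm the new certificate dates and that vdsm answers on its port with the renewed certificate (the handshake may still be rejected without a client cert, but the server certificate is printed):
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates
openssl s_client -connect localhost:54321 -CAfile /etc/pki/vdsm/certs/cacert.pem </dev/null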
1 year, 5 months
Unable to install oVirt on RHEL7.5
by SS00514758@techmahindra.com
Hi All,
I am unable to install oVirt on RHEL 7.5. To install it I am following the link below:
https://www.ovirt.org/documentation/install-guide/chap-Installing_oVirt.html
However, it is not working for me; a couple of dependencies are not getting installed, and because of this I am not able to run ovirt-engine. Below are the dependency packages that fail to install:
Error: Package: collectd-write_http-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
Requires: collectd(x86-64) = 5.8.0-6.1.el7
Removing: collectd-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-6.1.el7
Updated By: collectd-5.8.1-1.el7.x86_64 (epel)
collectd(x86-64) = 5.8.1-1.el7
Available: collectd-5.7.2-1.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-1.el7
Available: collectd-5.7.2-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-3.el7
Available: collectd-5.8.0-2.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-2.el7
Available: collectd-5.8.0-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-3.el7
Available: collectd-5.8.0-5.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-5.el7
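For reference, a hedged workaround sketch: the conflict comes from EPEL shipping a newer collectd than the opstools repo, so keeping EPEL's collectd out of the transaction usually resolves it (standard yum exclude syntax):
yum install ovirt-engine --disablerepo=epel
# or persistently, add to the [epel] section of /etc/yum.repos.d/epel.repo:
#   exclude=collectd*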
Please help me to install this.
Looking forward to resolving this issue.
Regards
Sumit Sahay
1 year, 6 months
Grafana - Origin Not Allowed
by Maton, Brett
oVirt 4.5.0.8-1.el8
I tried to connect to Grafana via the monitoring portal link from the dashboard,
and all panels fail to display any data with varying error messages,
but all include 'Origin Not Allowed'.
I navigated to Data Sources and ran a test on the PostgreSQL connection
(localhost) which threw the same Origin Not Allowed error message.
Any suggestions?
1 year, 6 months
Re: Failed to synchronize networks of Provider ovirt-provider-ovn
by Mail SET Inc. Group
Yes, I used the same manual to change the WebUI SSL certificate.
ovirt-ca-file= points to the same SSL file that the WebUI uses.
Yes, I restarted ovirt-provider-ovn, I restarted the engine, I restarted everything I could restart. Nothing...
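For reference, a minimal hedged check of whether the file named in ovirt-ca-file= really validates the certificate served on port 443 (hostname and paths are taken from the config quoted further down; look for "Verify return code: 0 (ok)"):
openssl s_client -connect engine.set.local:443 -CAfile /etc/pki/ovirt-engine/apache-ca.pem </dev/null
systemctl restart ovirt-provider-ovn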
> On 12 Sep 2018, at 16:11, Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Wed, 12 Sep 2018 14:23:54 +0300
> "Mail SET Inc. Group" <mail(a)set-pro.net> wrote:
>
>> Ok!
>
> Not exactly, please use users(a)ovirt.org for such questions.
> Other should benefit from this questions, too.
> Please write the next mail to users(a)ovirt.org and keep me in CC.
>
>> What i did:
>>
>> 1) installed oVirt "out of the box" (4.2.5.2-1.el7);
>> 2) generated my own SSL certificate for my engine using my FreeIPA CA, installed it and
>
> What does "Install it" mean? You can use the doc from the following link
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/...
>
> Ensure that ovirt-ca-file= in
> /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
> points to the correct file and ovirt-provider-ovn is restarted.
>
>> got this issue;
>>
>>
>> [root@engine ~]# tail -n 50 /var/log/ovirt-provider-ovn.log
>> 2018-09-12 14:10:23,828 root [SSL: CERTIFICATE_VERIFY_FAILED]
>> certificate verify failed (_ssl.c:579) Traceback (most recent call
>> last): File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py",
>> line 133, in _handle_request method, path_parts, content
>> File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py",
>> line 175, in handle_request return
>> self.call_response_handler(handler, content, parameters) File
>> "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in
>> call_response_handler return response_handler(content, parameters)
>> File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py",
>> line 62, in post_tokens user_password=user_password) File
>> "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in
>> create_token return auth.core.plugin.create_token(user_at_domain,
>> user_password) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line
>> 48, in create_token timeout=self._timeout()) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 75,
>> in create_token username, password, engine_url, ca_file, timeout)
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
>> 91, in _get_sso_token timeout=timeout File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 54,
>> in wrapper response = func(*args, **kwargs) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 47,
>> in wrapper raise BadGateway(e) BadGateway: [SSL:
>> CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
>>
>>
>> [root@engine ~]# tail -n 20 /var/log/ovirt-engine/engine.log
>> 2018-09-12 14:10:23,773+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
>> Acquired to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:10:23,778+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
>> Running command: SyncNetworkProviderCommand internal: true.
>> 2018-09-12 14:10:23,836+03 ERROR
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
>> Command
>> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:10:23,837+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
>> freed to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:14:12,477+03 INFO
>> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default
>> task-6) [] User admin@internal successfully logged in with scopes:
>> ovirt-app-admin ovirt-app-api ovirt-app-portal
>> ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all
>> ovirt-ext=token-info:authz-search
>> ovirt-ext=token-info:public-authz-search
>> ovirt-ext=token-info:validate ovirt-ext=token:password-access
>> 2018-09-12 14:14:12,587+03 INFO
>> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default
>> task-6) [1bf1b763] Running command: CreateUserSessionCommand
>> internal: false. 2018-09-12 14:14:12,628+03 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-6) [1bf1b763] EVENT_ID: USER_VDC_LOGIN(30), User
>> admin@internal-authz connecting from '10.0.3.61' using session
>> 's8jAm7BUJGlicthm6yZBA3CUM8QpRdtwFaK3M/IppfhB3fHFB9gmNf0cAlbl1xIhcJ2WX+ww7e71Ri+MxJSsIg=='
>> logged in. 2018-09-12 14:14:30,972+03 INFO
>> [org.ovirt.engine.core.bll.provider.ImportProviderCertificateCommand]
>> (default task-6) [ee3cc8a7-4485-4fdf-a0c2-e9d67b5cfcd3] Running
>> command: ImportProviderCertificateCommand internal: false. Entities
>> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
>> SystemAction group CREATE_STORAGE_POOL with role type ADMIN
>> 2018-09-12 14:14:30,982+03 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-6) [ee3cc8a7-4485-4fdf-a0c2-e9d67b5cfcd3] EVENT_ID:
>> PROVIDER_CERTIFICATE_IMPORTED(213), Certificate for provider
>> ovirt-provider-ovn was imported. (User: admin@internal-authz)
>> 2018-09-12 14:14:31,006+03 INFO
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (default task-6) [a48d94ab-b0b2-42a2-a667-0525b4c652ea] Running
>> command: TestProviderConnectivityCommand internal: false. Entities
>> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
>> SystemAction group CREATE_STORAGE_POOL with role type ADMIN
>> 2018-09-12 14:14:31,058+03 ERROR
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (default task-6) [a48d94ab-b0b2-42a2-a667-0525b4c652ea] Command
>> 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'default' is using 0 threads out of 1, 5 threads waiting for
>> tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engine' is using 0 threads out of 500, 16 threads waiting for
>> tasks and 0 tasks in queue. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engineScheduled' is using 0 threads out of 100, 100 threads
>> waiting for tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads
>> waiting for tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'hostUpdatesChecker' is using 0 threads out of 5, 2 threads
>> waiting for tasks. 2018-09-12 14:15:23,843+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f] Lock
>> Acquired to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:15:23,849+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f]
>> Running command: SyncNetworkProviderCommand internal: true.
>> 2018-09-12 14:15:23,900+03 ERROR
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f]
>> Command
>> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:15:23,901+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f] Lock
>> freed to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}'
>>
>>
>> [root@engine ~]#
>> cat /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf #
>> This file is automatically generated by engine-setup. Please do not
>> edit manually [OVN REMOTE] ovn-remote=ssl:127.0.0.1:6641
>> [SSL]
>> https-enabled=true
>> ssl-cacert-file=/etc/pki/ovirt-engine/ca.pem
>> ssl-cert-file=/etc/pki/ovirt-engine/certs/ovirt-provider-ovn.cer
>> ssl-key-file=/etc/pki/ovirt-engine/keys/ovirt-provider-ovn.key.nopass
>> [OVIRT]
>> ovirt-sso-client-secret=Ms7Gw9qNT6IkXu7oA54tDmxaZDIukABV
>> ovirt-host=https://engine.set.local:443
>> ovirt-sso-client-id=ovirt-provider-ovn
>> ovirt-ca-file=/etc/pki/ovirt-engine/apache-ca.pem
>> [PROVIDER]
>> provider-host=engine.set.local
>>
>>
>>> On 12 Sep 2018, at 13:59, Dominik Holler <dholler(a)redhat.com>
>>> wrote:
>>>
>>> On Wed, 12 Sep 2018 13:04:53 +0300
>>> "Mail SET Inc. Group" <mail(a)set-pro.net> wrote:
>>>
>>>> Hello Dominik!
>>>> I have the same issue with the OVN provider and SSL
>>>> https://www.mail-archive.com/users@ovirt.org/msg47020.html
>>>> <https://www.mail-archive.com/users@ovirt.org/msg47020.html> But
>>>> changing the certificates does not help to resolve it. Maybe you can help me
>>>> with this?
>>>
>>> Sure. Can you please share the relevant lines of
>>> ovirt-provider-ovn.log and engine.log, and the information if you
>>> are using the certificates generated by engine-setup with
>>> users(a)ovirt.org ? Thanks,
>>> Dominik
>>>
>>
>
>
1 year, 8 months
engine-config -s UserSessionTimeOutInterval=X problem
by marek
ovirt 4.5.4, standalone engine, centos 8 stream
[root@ovirt ~]# engine-config -g UserSessionTimeOutInterval
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
UserSessionTimeOutInterval: 30 version: general
[root@ovirt ~]# engine-config -s UserSessionTimeOutInterval=60
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Cannot set value 60 to key UserSessionTimeOutInterval.
Any ideas where the problem is?
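For reference, a hedged debugging step: check which versions the key accepts and try setting it for an explicit config version; some keys only take a value when --cver is given:
engine-config -l | grep -i UserSessionTimeOutInterval
engine-config -s UserSessionTimeOutInterval=60 --cver=general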
Marek
1 year, 10 months
engine-setup failing on 4.3.2 -> 4.3.3 fails during Engine schema refresh fail
by Edward Berger
I was trying to upgrade a hyperconverged oVirt hosted engine and it failed in
the engine-setup command with these errors and warnings.
...
[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] schema.sh: FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
refresh failed
...
[ INFO ] Yum Verify: 16/16: ovirt-engine-tools.noarch 0:4.3.3.5-1.el7 - e
[WARNING] Rollback of DWH database postponed to Stage "Clean up"
[ INFO ] Rolling back database schema
...
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
Attaching engine-setup logfile.
1 year, 11 months
Unable to change the admin password on oVirt 4.5.2.5
by Ayansh Rocks
Hi All,
Any idea how to change the password of the admin user on oVirt 4.5.2.5?
The below is not working:
[root@ovirt]# ovirt-aaa-jdbc-tool user password-reset admin
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Password:
Reenter password:
updating user admin...
user updated successfully
[root@delhi-test-ovirtm-02 ~]#
The above reports success, but the password is not changed.
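For reference, a hedged note: password-reset can succeed while the new password is already expired, so setting an explicit validity is the usual workaround (the date below is a placeholder):
ovirt-aaa-jdbc-tool user password-reset admin --password-valid-to="2030-01-01 00:00:00Z"
ovirt-aaa-jdbc-tool user show admin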
Thanks
2 years, 2 months
/tmp/lvm.log keeps growing in Host
by kanehisa@ktcsp.net
I don't understand why lvm.log is placed in the /tmp directory without rotation.
I noticed this fact when I got the following event notification every 2 hours.
EventID :24
Message :Critical, Low disk space. Host ovirt01 has less than 500 MB of free space left on: /tmp. Low disk space might cause an issue upgrading this host.
As a workaround, I added a log rotation setting to /tmp/lvm.log, but is this the correct way?
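For reference, a hedged sketch of the kind of logrotate drop-in described above (values are illustrative; copytruncate matters because the writer keeps the file open):
cat > /etc/logrotate.d/lvm-tmp <<'EOF'
/tmp/lvm.log {
    weekly
    rotate 2
    compress
    missingok
    copytruncate
}
EOF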
I should have understood the contents of the Python program below before asking the question,
but please forgive me because I am not very knowledgeable about Python.
# cat /usr/lib/python3.6/site-packages/blivet/devicelibs/lvm.py | grep lvm.log
config_string += "log {level=7 file=/tmp/lvm.log syslog=0}"
Thanks in advance!!
Further information is below
[root@ovirt01 ~]# cat /etc/os-release
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8.7.2206.0"
VARIANT="oVirt Node 4.5.4"
VARIANT_ID="ovirt-node"
PRETTY_NAME="oVirt Node 4.5.4"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.ovirt.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
PLATFORM_ID="platform:el8"
[root@ovirt01 ~]# uname -a
Linux ovirt01 4.18.0-408.el8.x86_64 #1 SMP Mon Jul 18 17:42:52 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@ovirt01 ~]# df -h | grep -E " /tmp|Filesystem"
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/onn_ovirt01-tmp 1014M 515M 499M 51% /tmp
[root@ovirt01 ~]# stat /tmp/lvm.log
File: /tmp/lvm.log
Size: 463915707 Blocks: 906088 IO Block: 4096 regular file
Device: fd0eh/64782d Inode: 137 Links: 1
Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:lvm_tmp_t:s0
Access: 2023-02-03 10:30:06.605936740 +0900
Modify: 2023-02-03 09:52:19.712301285 +0900
Change: 2023-02-03 09:52:19.712301285 +0900
Birth: 2023-01-16 01:06:02.768495837 +0900
2 years, 2 months
4.4.9 -> 4.4.10 Cannot start or migrate any VM (hotpluggable cpus requested exceeds the maximum cpus supported by KVM)
by Jillian Morgan
After upgrading the engine from 4.4.9 to 4.4.10, and then upgrading one
host, any attempt to migrate a VM to that host or start a VM on that host
results in the following error:
Number of hotpluggable cpus requested (16) exceeds the maximum cpus
supported by KVM (8)
While the version of qemu is the same across hosts, (
qemu-kvm-6.0.0-33.el8s.x86_64), I traced the difference to the upgraded
kernel on the new host. I have always run elrepo's kernel-ml on these hosts
to support bcache which RHEL's kernel doesn't support. The working hosts
still run kernel-ml-5.15.12. The upgraded host ran kernel-ml-5.17.0.
In case anyone else runs kernel-ml, have you run into this issue?
Does anyone know why KVM's KVM_CAP_MAX_VCPUS value is lowered on the new
kernel?
Does anyone know how to query the KVM capabilities from userspace without
writing a program leveraging kvm_ioctl()'s?
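A hedged partial answer to the userspace question: libvirt already surfaces the hypervisor's reported vCPU ceiling in its domain capabilities output, so no custom ioctl code is needed for that value:
virsh domcapabilities | grep -i '<vcpu'   # shows the <vcpu max='...'/> the hypervisor advertises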
Related to this, it seems that oVirt and/or libvirtd always runs qemu-kvm
with an -smp argument of "maxcpus=16". This causes qemu's built-in check to
fail on the new kernel which is supporting max_vpus of 8.
Why does ovirt always request maxcpus=16?
And yes, before you say it, I know you're going to say that running
kernel-ml isn't supported.
--
Jillian Morgan (she/her) 🏳️⚧️
Systems & Networking Specialist
Primordial Software Group & I.T. Consultancy
https://www.primordial.ca
2 years, 2 months
4.5.2 Create Additional Gluster Logical Volumes fails
by simon@justconnect.ie
Hi,
In 4.4 adding additional gluster volumes was a simple ansible task (or via cockpit).
With 4.5.2 I tried to add new volumes but the logic has changed/broken. Here's the error I am getting:
TASK [gluster.infra/roles/backend_setup : Create volume groups] ********************************************************************************************************************************
failed: [bdtovirthcidmz02-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.010442", "end": "2022-11-10 13:11:16.717772", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:11:16.707330", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
failed: [bdtovirthcidmz03-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.010231", "end": "2022-11-10 13:12:35.607565", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:12:35.597334", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
failed: [bdtovirthcidmz01-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.011282", "end": "2022-11-10 13:13:24.336233", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:13:24.324951", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
The vg was created as part of the initial ansible build with logical volumes being added when required.
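For reference, two hedged checks that follow directly from the messages above: whether the VG really pre-exists on those hosts, and whether the local lvm.conf still validates (the "filter ... not part of any section" message points at a malformed filter setting):
vgs | grep gluster_vg_sda
lvmconfig --validate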
Any assistance would be greatly appreciated.
Kind regards
Simon
2 years, 3 months
oVirt On Rocky 8.x - Upgrade To Rocky 9.1
by Matthew J Black
Hi All,
Sorry if this was mentioned previously (I obviously missed it if it was), but can we upgrade an oVirt (latest version) Host/Cluster and/or the oVirt Engine VM from Rocky Linux (RHEL) v8.6/8.7 to v9.1 (yet), and if so, what is / where can I find the procedure to do this - ie is there anything "special" that needs to be done because of oVirt, or can we just do a "simple" v8.x -> v9.1 upgrade?
Thanks in advance
Cheers
Dulux-Oz
2 years, 3 months
oVirt 4.4 hosted engine deploy fails - repository issues
by lars.stolpe@bvg.de
Hi,
I want to upgrade oVirt 4.3 to oVirt 4.4. Thus I have to reinstall one node with EL8 and deploy the engine with restore.
I get this error message at deploy:
[ INFO ] TASK [ovirt.ovirt.engine_setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> 192.168.2.143]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'ovirt-4.4-centos-ceph-pacific': Cannot prepare internal mirrorlist: Curl error (56): Failure when receiving data from the peer for http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=storage-c... [Recv failure: Connection reset by peer]", "rc": 1, "results": []}
Since I use our Satellite server, this URL is not included in the repositories I provided. A repository named 'ovirt-4.4-centos-ceph-pacific' is definitely provided and available.
How do I get the deploy to use the correct repositories?
I hope someone can help me out,
best regards
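One hedged workaround, assuming you can get a shell on the temporary engine VM the deploy creates (the 192.168.2.143 host in the error) and that your Satellite publishes an equivalent repository: find the .repo file that defines the failing repo id, replace its mirrorlist with a baseurl pointing at the Satellite, and verify the metadata resolves before re-running the deploy. For example:
# grep -rl ovirt-4.4-centos-ceph-pacific /etc/yum.repos.d/
(locates the file shipped by the oVirt release package; in it, comment out the mirrorlist= line and set baseurl= to your Satellite-hosted mirror)
# dnf --disablerepo='*' --enablerepo=ovirt-4.4-centos-ceph-pacific makecache
(quick test that the repo metadata now downloads cleanly)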
2 years, 4 months
Very long reboot times of "RH" hosts with oVirt installed
by Peter H
I was just wondering if anyone else is experiencing the following issue.
On both real physical machines and on VMs, when I have installed a RH derivative like CentOS, CentOS Stream, Rocky Linux or Alma Linux, the reboot times are initially normal.
After installing oVirt and all of its dependencies, the reboot (shutdown) time increases to anywhere between 3 and 5 minutes. During
shutdown a blinking cursor is visible in the upper left corner. If
sitting at the console
E.g. for
2 years, 4 months
Importing Windows VM from OVA/OVF that was exported from VSphere fails
by wcordero8@gmail.com
Description of problem: when importing a Windows VM from OVA/OVF, the import fails:
Cannot import VM. Invalid time zone for given OS type.
Attribute: vm.vmStatic
Infrastructure:
VMware ESXi, 7.0.3, 19193900
oVirt Version 4.5.4-1.el8
oVirt self-hosted engine
Steps to Reproduce:
1. Export a Windows VM (Microsoft Windows Server 2019, 64-bit) that has the SA Pacific Standard Time timezone ((UTC-05:00) Bogotá, Lima, Quito) from VSphere to OVA/OVF with ovftool.
2. Import the VM in oVirt.
The import fails with:
Cannot import VM. Invalid time zone for given OS type.
Attribute: vm.vmStatic
[org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] (default task-2) [72bb230a-bc1c-41c6-b87f-3891764b9fdd] Validation of action 'ImportVmFromOva' failed for user Reasons: VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_INVALID_TIMEZONE,$groups [Ljava.lang.Class;@746306ef,$message ACTION_TYPE_FAILED_INVALID_TIMEZONE,$payload [Ljava.lang.Class;@2d685ee5,ACTION_TYPE_FAILED_ATTRIBUTE_PATH,$path vm.vmStatic,$validatedValue
# cat Implementacion_02-2.ovf
<?xml version='1.0' encoding='UTF-8'?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationS..." xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettin...">
<References>
<File ovf:id="file1" ovf:href="Implementacion_02-2-1.vmdk"/>
<File ovf:id="file2" ovf:href="Implementacion_02-2-2.vmdk"/>
<File ovf:id="file3" ovf:href="Implementacion_02-2-3.nvram" ovf:size="270840"/>
</References>
<DiskSection>
<Info>List of the virtual disks</Info>
<Disk ovf:capacityAllocationUnits="byte" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:diskId="vmdisk1" ovf:capacity="161061273600" ovf:fileRef="file1"/>
<Disk ovf:capacityAllocationUnits="byte" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:diskId="vmdisk2" ovf:capacity="161061273600" ovf:fileRef="file2"/>
</DiskSection>
<NetworkSection>
<Info>The list of logical networks</Info>
<Network ovf:name="DVPG_102">
<Description>The DVPG_102 network</Description>
</Network>
</NetworkSection>
<VirtualSystem ovf:id="Implementacion_02-2">
<Info>A Virtual system</Info>
<Name>Implementacion_02-2</Name>
<OperatingSystemSection ovf:id="112" vmw:osType="windows2019srv_64Guest">
<Info>The operating system installed</Info>
<Description>Microsoft Windows Server 2019 (64-bit)</Description>
</OperatingSystemSection>
<VirtualHardwareSection>
<Info>Virtual hardware requirements</Info>
<System>
<vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
<vssd:InstanceID>0</vssd:InstanceID>
<vssd:VirtualSystemType>vmx-18</vssd:VirtualSystemType>
</System>
<Item>
<rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
<rasd:Description>Number of Virtual CPUs</rasd:Description>
<rasd:ElementName>2 virtual CPU(s)</rasd:ElementName>
<rasd:InstanceID>1</rasd:InstanceID>
<rasd:ResourceType>3</rasd:ResourceType>
<rasd:VirtualQuantity>2</rasd:VirtualQuantity>
<vmw:CoresPerSocket ovf:required="false">1</vmw:CoresPerSocket>
</Item>
<Item>
<rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
<rasd:Description>Memory Size</rasd:Description>
<rasd:ElementName>4096MB of memory</rasd:ElementName>
<rasd:InstanceID>2</rasd:InstanceID>
<rasd:ResourceType>4</rasd:ResourceType>
<rasd:VirtualQuantity>4096</rasd:VirtualQuantity>
</Item>
<Item>
<rasd:Address>0</rasd:Address>
<rasd:Description>SCSI Controller</rasd:Description>
<rasd:ElementName>SCSI Controller 1</rasd:ElementName>
<rasd:InstanceID>3</rasd:InstanceID>
<rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
<rasd:ResourceType>6</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="slotInfo.pciSlotNumber" vmw:value="160"/>
</Item>
<Item>
<rasd:Address>0</rasd:Address>
<rasd:Description>SATA Controller</rasd:Description>
<rasd:ElementName>SATA Controller 1</rasd:ElementName>
<rasd:InstanceID>4</rasd:InstanceID>
<rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
<rasd:ResourceType>20</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="slotInfo.pciSlotNumber" vmw:value="33"/>
</Item>
<Item>
<rasd:Description>USB Controller (XHCI)</rasd:Description>
<rasd:ElementName>USB controller</rasd:ElementName>
<rasd:InstanceID>5</rasd:InstanceID>
<rasd:ResourceSubType>vmware.usb.xhci</rasd:ResourceSubType>
<rasd:ResourceType>23</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="slotInfo.pciSlotNumber" vmw:value="224"/>
</Item>
<Item>
<rasd:AddressOnParent>0</rasd:AddressOnParent>
<rasd:ElementName>Hard Disk 1</rasd:ElementName>
<rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
<rasd:InstanceID>6</rasd:InstanceID>
<rasd:Parent>3</rasd:Parent>
<rasd:ResourceType>17</rasd:ResourceType>
</Item>
<Item>
<rasd:AddressOnParent>1</rasd:AddressOnParent>
<rasd:ElementName>Hard Disk 2</rasd:ElementName>
<rasd:HostResource>ovf:/disk/vmdisk2</rasd:HostResource>
<rasd:InstanceID>7</rasd:InstanceID>
<rasd:Parent>3</rasd:Parent>
<rasd:ResourceType>17</rasd:ResourceType>
</Item>
<Item>
<rasd:AddressOnParent>0</rasd:AddressOnParent>
<rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
<rasd:ElementName>CD/DVD Drive 1</rasd:ElementName>
<rasd:InstanceID>8</rasd:InstanceID>
<rasd:Parent>4</rasd:Parent>
<rasd:ResourceSubType>vmware.cdrom.remoteatapi</rasd:ResourceSubType>
<rasd:ResourceType>15</rasd:ResourceType>
</Item>
<Item>
<rasd:AddressOnParent>0</rasd:AddressOnParent>
<rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
<rasd:Connection>DVPG_102</rasd:Connection>
<rasd:ElementName>Network adapter 1</rasd:ElementName>
<rasd:InstanceID>9</rasd:InstanceID>
<rasd:ResourceSubType>VmxNet3</rasd:ResourceSubType>
<rasd:ResourceType>10</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="slotInfo.pciSlotNumber" vmw:value="192"/>
<vmw:Config ovf:required="false" vmw:key="connectable.allowGuestControl" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="uptCompatibilityEnabled" vmw:value="true"/>
</Item>
<Item ovf:required="false">
<rasd:ElementName>Video card</rasd:ElementName>
<rasd:InstanceID>10</rasd:InstanceID>
<rasd:ResourceType>24</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="useAutoDetect" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="graphicsMemorySizeInKB" vmw:value="262144"/>
<vmw:Config ovf:required="false" vmw:key="use3dRenderer" vmw:value="automatic"/>
<vmw:Config ovf:required="false" vmw:key="numDisplays" vmw:value="1"/>
<vmw:Config ovf:required="false" vmw:key="videoRamSizeInKB" vmw:value="16384"/>
</Item>
<vmw:Config ovf:required="false" vmw:key="cpuHotAddEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="sgxInfo.epcSize" vmw:value="0"/>
<vmw:Config ovf:required="false" vmw:key="nestedHVEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="virtualSMCPresent" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="flags.vvtdEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="cpuHotRemoveEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="sgxInfo.flcMode" vmw:value="unlocked"/>
<vmw:Config ovf:required="false" vmw:key="sevEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="virtualICH7MPresent" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="flags.vbsEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="bootOptions.efiSecureBootEnabled" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="firmware" vmw:value="efi"/>
<vmw:ExtraConfig ovf:required="false" vmw:key="nvram" vmw:value="ovf:/file/file3"/>
<vmw:ExtraConfig ovf:required="false" vmw:key="svga.autodetect" vmw:value="TRUE"/>
</VirtualHardwareSection>
</VirtualSystem>
# cat /etc/ovirt-engine/timezones/00-defaults.properties | grep -ie bogota
America/Bogota=SA Pacific Standard Time
Any suggestions for me?
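One more thing that may be worth checking: the validation compares the VM's timezone against the keys the engine accepts for the chosen OS type, and the Windows-specific mapping may live in a different properties file than the one grepped above (an assumption - only 00-defaults.properties is shown here). Widening the search would at least confirm whether 'SA Pacific Standard Time' is a value the engine knows for Windows guests:
# ls /etc/ovirt-engine/timezones/
# grep -ri 'SA Pacific' /etc/ovirt-engine/timezones/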
2 years, 4 months
Importing VM from OVA/OVF that was exported from VSphere fails ConvertOvaCommand - ImportVmFromOvaCommand
by wcordero8@gmail.com
Description of problem: when importing a Windows VM from OVA/OVF, the import fails:
EVENT_ID: IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm Implementacion_02-2 to Data Center dc_rcloud, Cluster cl_rcloud
Infrastructure:
VMware ESXi, 7.0.3, 19193900
oVirt Version 4.5.4-1.el8
oVirt self-hosted engine
Steps to Reproduce:
1. Export the Windows VM to OVA/OVF from VSphere with ovftool.
2. Import the VM in oVirt.
I'm getting some alerts right after selecting the VM:
2023-02-27 11:09:00,676-05 WARN [org.ovirt.engine.core.utils.ovf.OvfReader] (default task-38) [3eba88b2-7b98-48d4-8488-2f3331fb41e6] could not retrieve volume id of file1 from ovf, generating new guid
2023-02-27 11:09:00,676-05 WARN [org.ovirt.engine.core.utils.ovf.OvfReader] (default task-38) [3eba88b2-7b98-48d4-8488-2f3331fb41e6] could not retrieve disk id of vmdisk1 from ovf, generating new guid
2023-02-27 11:09:00,677-05 WARN [org.ovirt.engine.core.utils.ovf.OvfReader] (default task-38) [3eba88b2-7b98-48d4-8488-2f3331fb41e6] could not retrieve volume id of file2 from ovf, generating new guid
2023-02-27 11:09:00,677-05 WARN [org.ovirt.engine.core.utils.ovf.OvfReader] (default task-38) [3eba88b2-7b98-48d4-8488-2f3331fb41e6] could not retrieve disk id of vmdisk2 from ovf, generating new guid
Then I start the import process, and the first errors that appear are:
2023-02-27 11:09:42,353-05 ERROR [org.ovirt.engine.core.bll.exportimport.ConvertOvaCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Ending command 'org.ovirt.engine.core.bll.exportimport.ConvertOvaCommand' with failure.
2023-02-27 11:09:43,639-05 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Ending command 'org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand' with failure.
2023-02-27 11:09:44,409-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] EVENT_ID: IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm Implementacion_02-2 to Data Center dc_rcloud, Cluster cl_rcloud
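Since the OVA import delegates the actual disk conversion to virt-v2v on the selected host, running the conversion by hand against the extracted OVA directory usually gives a far more specific error than the engine log. A sketch, assuming virt-v2v is installed on the host and using a placeholder path for the directory listed below:
# virt-v2v -i ova /path/to/Implementacion_02-2 -o null -v -x 2>&1 | tee /tmp/v2v-test.log
(-o null converts and then discards the result, so nothing is written to a storage domain; -v -x enables verbose/debug output)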
# ls -l Implementacion_02-2/
total 234998648
-rw-rw-rw-. 1 vdsm kvm 111599878656 Feb 27 10:15 Implementacion_02-2-disk1.vmdk
-rw-rw-rw-. 1 vdsm kvm 129038421504 Feb 27 10:22 Implementacion_02-2-disk2.vmdk
-rw-rw-rw-. 1 vdsm kvm 270840 Feb 27 10:22 Implementacion_02-2-file1.nvram
-rw-rw-rw-. 1 vdsm kvm 414 Feb 27 10:22 Implementacion_02-2.mf
-rw-rw-rw-. 1 vdsm kvm 10891 Feb 27 10:26 Implementacion_02-2.ovf
# cat Implementacion_02-2.ovf
<?xml version="1.0" encoding="UTF-8"?>
<!--Generated by VMware VirtualCenter Server, User: VSPHERE.LOCAL\Administrator, UTC time: 2023-02-25T21:32:50.888437Z-->
<Envelope vmw:buildId="build-19480866" xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationS..." xmlns:vmw="http://www.vmware.com/schema/ovf" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettin..." xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<References>
<File ovf:href="Implementacion_02-2-disk1.vmdk" ovf:id="file1" ovf:size="111599878656"/>
<File ovf:href="Implementacion_02-2-disk2.vmdk" ovf:id="file2" ovf:size="129038421504"/>
<File ovf:href="Implementacion_02-2-file1.nvram" ovf:id="file3" ovf:size="270840"/>
</References>
<DiskSection>
<Info>Virtual disk information</Info>
<Disk ovf:capacity="150" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="117837135872"/>
<Disk ovf:capacity="150" ovf:capacityAllocationUnits="byte * 2^30" ovf:diskId="vmdisk2" ovf:fileRef="file2" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:populatedSize="148116078592"/>
</DiskSection>
<NetworkSection>
<Info>The list of logical networks</Info>
<Network ovf:name="DVPG_102">
<Description>The DVPG_102 network</Description>
</Network>
</NetworkSection>
<VirtualSystem ovf:id="Implementacion_02-2">
<Info>A virtual machine</Info>
<Name>Implementacion_02-2</Name>
<OperatingSystemSection ovf:id="122" vmw:osType="windows2019srv_64Guest">
<Info>The kind of installed guest operating system</Info>
<Description>Microsoft Windows Server 2019 (64-bit)</Description>
</OperatingSystemSection>
<VirtualHardwareSection>
<Info>Virtual hardware requirements</Info>
<System>
<vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
<vssd:InstanceID>0</vssd:InstanceID>
<vssd:VirtualSystemIdentifier>Implementacion_02-2</vssd:VirtualSystemIdentifier>
<vssd:VirtualSystemType>vmx-18</vssd:VirtualSystemType>
</System>
<Item>
<rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
<rasd:Description>Number of Virtual CPUs</rasd:Description>
<rasd:ElementName>2 virtual CPU(s)</rasd:ElementName>
<rasd:InstanceID>1</rasd:InstanceID>
<rasd:ResourceType>3</rasd:ResourceType>
<rasd:VirtualQuantity>2</rasd:VirtualQuantity>
</Item>
<Item>
<rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
<rasd:Description>Memory Size</rasd:Description>
<rasd:ElementName>4096MB of memory</rasd:ElementName>
<rasd:InstanceID>2</rasd:InstanceID>
<rasd:ResourceType>4</rasd:ResourceType>
<rasd:VirtualQuantity>4096</rasd:VirtualQuantity>
</Item>
<Item>
<rasd:Address>0</rasd:Address>
<rasd:Description>SATA Controller</rasd:Description>
<rasd:ElementName>SATA controller 0</rasd:ElementName>
<rasd:InstanceID>3</rasd:InstanceID>
<rasd:ResourceSubType>vmware.sata.ahci</rasd:ResourceSubType>
<rasd:ResourceType>20</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="slotInfo.pciSlotNumber" vmw:value="33"/>
</Item>
<Item>
<rasd:Address>0</rasd:Address>
<rasd:Description>SCSI Controller</rasd:Description>
<rasd:ElementName>SCSI controller 0</rasd:ElementName>
<rasd:InstanceID>4</rasd:InstanceID>
<rasd:ResourceSubType>lsilogicsas</rasd:ResourceSubType>
<rasd:ResourceType>6</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="slotInfo.pciSlotNumber" vmw:value="160"/>
</Item>
<Item ovf:required="false">
<rasd:Address>0</rasd:Address>
<rasd:Description>USB Controller (XHCI)</rasd:Description>
<rasd:ElementName>USB xHCI controller</rasd:ElementName>
<rasd:InstanceID>5</rasd:InstanceID>
<rasd:ResourceSubType>vmware.usb.xhci</rasd:ResourceSubType>
<rasd:ResourceType>23</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="slotInfo.pciSlotNumber" vmw:value="224"/>
</Item>
<Item>
<rasd:Address>1</rasd:Address>
<rasd:Description>IDE Controller</rasd:Description>
<rasd:ElementName>IDE 1</rasd:ElementName>
<rasd:InstanceID>6</rasd:InstanceID>
<rasd:ResourceType>5</rasd:ResourceType>
</Item>
<Item>
<rasd:Address>0</rasd:Address>
<rasd:Description>IDE Controller</rasd:Description>
<rasd:ElementName>IDE 0</rasd:ElementName>
<rasd:InstanceID>7</rasd:InstanceID>
<rasd:ResourceType>5</rasd:ResourceType>
</Item>
<Item ovf:required="false">
<rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
<rasd:ElementName>Video card</rasd:ElementName>
<rasd:InstanceID>8</rasd:InstanceID>
<rasd:ResourceType>24</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="useAutoDetect" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="videoRamSizeInKB" vmw:value="16384"/>
<vmw:Config ovf:required="false" vmw:key="enable3DSupport" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="use3dRenderer" vmw:value="automatic"/>
<vmw:Config ovf:required="false" vmw:key="graphicsMemorySizeInKB" vmw:value="262144"/>
</Item>
<Item ovf:required="false">
<rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
<rasd:ElementName>VMCI device</rasd:ElementName>
<rasd:InstanceID>9</rasd:InstanceID>
<rasd:ResourceSubType>vmware.vmci</rasd:ResourceSubType>
<rasd:ResourceType>1</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="slotInfo.pciSlotNumber" vmw:value="32"/>
<vmw:Config ovf:required="false" vmw:key="allowUnrestrictedCommunication" vmw:value="false"/>
</Item>
<Item ovf:required="false">
<rasd:AddressOnParent>0</rasd:AddressOnParent>
<rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
<rasd:ElementName>CD/DVD drive 1</rasd:ElementName>
<rasd:InstanceID>10</rasd:InstanceID>
<rasd:Parent>3</rasd:Parent>
<rasd:ResourceSubType>vmware.cdrom.remoteatapi</rasd:ResourceSubType>
<rasd:ResourceType>15</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="connectable.allowGuestControl" vmw:value="true"/>
</Item>
<Item>
<rasd:AddressOnParent>1</rasd:AddressOnParent>
<rasd:ElementName>Hard disk 1</rasd:ElementName>
<rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
<rasd:InstanceID>11</rasd:InstanceID>
<rasd:Parent>4</rasd:Parent>
<rasd:ResourceType>17</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="backing.writeThrough" vmw:value="false"/>
</Item>
<Item>
<rasd:AddressOnParent>0</rasd:AddressOnParent>
<rasd:ElementName>Hard disk 2</rasd:ElementName>
<rasd:HostResource>ovf:/disk/vmdisk2</rasd:HostResource>
<rasd:InstanceID>12</rasd:InstanceID>
<rasd:Parent>4</rasd:Parent>
<rasd:ResourceType>17</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="backing.writeThrough" vmw:value="false"/>
</Item>
<Item>
<rasd:AddressOnParent>7</rasd:AddressOnParent>
<rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
<rasd:Connection>DVPG_102</rasd:Connection>
<rasd:Description>VmxNet3 ethernet adapter on "DVPG_102"</rasd:Description>
<rasd:ElementName>Network adapter 1</rasd:ElementName>
<rasd:InstanceID>13</rasd:InstanceID>
<rasd:ResourceSubType>VmxNet3</rasd:ResourceSubType>
<rasd:ResourceType>10</rasd:ResourceType>
<vmw:Config ovf:required="false" vmw:key="slotInfo.pciSlotNumber" vmw:value="192"/>
<vmw:Config ovf:required="false" vmw:key="wakeOnLanEnabled" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="connectable.allowGuestControl" vmw:value="true"/>
</Item>
<vmw:Config ovf:required="false" vmw:key="cpuHotAddEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="cpuHotRemoveEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="memoryHotAddEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="firmware" vmw:value="efi"/>
<vmw:Config ovf:required="false" vmw:key="cpuAllocation.shares.shares" vmw:value="2000"/>
<vmw:Config ovf:required="false" vmw:key="cpuAllocation.shares.level" vmw:value="normal"/>
<vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHost" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="tools.syncTimeWithHostAllowed" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="tools.afterPowerOn" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="tools.afterResume" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="tools.beforeGuestShutdown" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="tools.beforeGuestStandby" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="tools.toolsUpgradePolicy" vmw:value="manual"/>
<vmw:Config ovf:required="false" vmw:key="powerOpInfo.powerOffType" vmw:value="soft"/>
<vmw:Config ovf:required="false" vmw:key="powerOpInfo.resetType" vmw:value="soft"/>
<vmw:Config ovf:required="false" vmw:key="powerOpInfo.suspendType" vmw:value="soft"/>
<vmw:Config ovf:required="false" vmw:key="nestedHVEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="vPMCEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="virtualICH7MPresent" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="virtualSMCPresent" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="flags.vvtdEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="flags.vbsEnabled" vmw:value="false"/>
<vmw:Config ovf:required="false" vmw:key="bootOptions.efiSecureBootEnabled" vmw:value="true"/>
<vmw:Config ovf:required="false" vmw:key="powerOpInfo.standbyAction" vmw:value="checkpoint"/>
<vmw:ExtraConfig ovf:required="false" vmw:key="nvram" vmw:value="ovf:/file/file3"/>
<vmw:ExtraConfig ovf:required="false" vmw:key="svga.autodetect" vmw:value="TRUE"/>
</VirtualHardwareSection>
</VirtualSystem>
</Envelope>
the complete log is:
2023-02-27 11:08:57,658-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-38) [3eba88b2-7b98-48d4-8488-2f3331fb41e6] EVENT_ID: ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Query OVA info. Run query script.
2023-02-27 11:08:57,665-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-38) [3eba88b2-7b98-48d4-8488-2f3331fb41e6] EVENT_ID: ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Query OVA info. Remove temp directory.
2023-02-27 11:09:00,676-05 WARN [org.ovirt.engine.core.utils.ovf.OvfReader] (default task-38) [3eba88b2-7b98-48d4-8488-2f3331fb41e6] could not retrieve volume id of file1 from ovf, generating new guid
2023-02-27 11:09:00,676-05 WARN [org.ovirt.engine.core.utils.ovf.OvfReader] (default task-38) [3eba88b2-7b98-48d4-8488-2f3331fb41e6] could not retrieve disk id of vmdisk1 from ovf, generating new guid
2023-02-27 11:09:00,677-05 WARN [org.ovirt.engine.core.utils.ovf.OvfReader] (default task-38) [3eba88b2-7b98-48d4-8488-2f3331fb41e6] could not retrieve volume id of file2 from ovf, generating new guid
2023-02-27 11:09:00,677-05 WARN [org.ovirt.engine.core.utils.ovf.OvfReader] (default task-38) [3eba88b2-7b98-48d4-8488-2f3331fb41e6] could not retrieve disk id of vmdisk2 from ovf, generating new guid
2023-02-27 11:09:19,408-05 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] (default task-38) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Lock Acquired to object 'EngineLock:{exclusiveLocks='[557421e0-d5f8-4caa-a84c-d1df24cb5a08=VM, 8e4c49b1-5078-4c89-9ccc-237b95b5dce4=DISK, Implementacion_02-2=VM_NAME]', sharedLocks=''}'
2023-02-27 11:09:19,493-05 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Running command: ImportVmFromOvaCommand internal: false. Entities affected : ID: 5d22cf63-93e3-4d4d-9e53-395608923d6d Type: ClusterAction group CREATE_VM with role type USER, ID: 588d888c-ff14-4191-a5bc-a38f65dc29e9 Type: StorageAction group IMPORT_EXPORT_VM with role type ADMIN
2023-02-27 11:09:19,603-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Running command: AddDiskCommand internal: true. Entities affected : ID: 557421e0-d5f8-4caa-a84c-d1df24cb5a08 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER, ID: 588d888c-ff14-4191-a5bc-a38f65dc29e9 Type: StorageAction group CREATE_DISK with role type USER
2023-02-27 11:09:19,619-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Running command: AddImageFromScratchCommand internal: true. Entities affected : ID: 588d888c-ff14-4191-a5bc-a38f65dc29e9 Type: Storage
2023-02-27 11:09:19,636-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', ignoreFailoverLimit='false', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='096444a2-0f21-4ea0-b4c3-3a4604d232bf', imageSizeInBytes='161061273600', volumeFormat='COW', newImageId='213d572b-c37f-484e-ad50-b513ae68d455', imageType='Sparse', newImageDescription='{"DiskAlias":"vmdisk1","DiskDescription":""}', imageInitialSizeInBytes='117837135872', imageId='00000000-0000-0000-0000-000000000000', sourceImageGroupId='00000000-0000-0000-0000-000000000000', shouldAddBitmaps='false', legal='true', sequenceNumber='1', bitmap='null'}), log id: 50719fb2
2023-02-27 11:09:19,673-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, CreateVolumeVDSCommand, return: 213d572b-c37f-484e-ad50-b513ae68d455, log id: 50719fb2
2023-02-27 11:09:19,678-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command '9850b22f-13fd-4c17-afef-83483140d670'
2023-02-27 11:09:19,678-05 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] CommandMultiAsyncTasks::attachTask: Attaching task 'b168c512-9335-405d-b239-05b6e663aaa5' to command '9850b22f-13fd-4c17-afef-83483140d670'.
2023-02-27 11:09:19,689-05 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Adding task 'b168c512-9335-405d-b239-05b6e663aaa5' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2023-02-27 11:09:19,697-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] BaseAsyncTask::startPollingTask: Starting to poll task 'b168c512-9335-405d-b239-05b6e663aaa5'.
2023-02-27 11:09:19,718-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] EVENT_ID: ADD_DISK_INTERNAL(2,036), Add-Disk operation of 'vmdisk1' was initiated by the system.
2023-02-27 11:09:19,800-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Running command: AddDiskCommand internal: true. Entities affected : ID: 557421e0-d5f8-4caa-a84c-d1df24cb5a08 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER, ID: 588d888c-ff14-4191-a5bc-a38f65dc29e9 Type: StorageAction group CREATE_DISK with role type USER
2023-02-27 11:09:19,813-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Running command: AddImageFromScratchCommand internal: true. Entities affected : ID: 588d888c-ff14-4191-a5bc-a38f65dc29e9 Type: Storage
2023-02-27 11:09:19,822-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', ignoreFailoverLimit='false', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='7ae5ded5-d0c8-41f8-ab30-401aef4a0e93', imageSizeInBytes='161061273600', volumeFormat='COW', newImageId='482e7dc2-faf9-480b-bbf9-2ece1a12cc5a', imageType='Sparse', newImageDescription='{"DiskAlias":"vmdisk2","DiskDescription":""}', imageInitialSizeInBytes='148116078592', imageId='00000000-0000-0000-0000-000000000000', sourceImageGroupId='00000000-0000-0000-0000-000000000000', shouldAddBitmaps='false', legal='true', sequenceNumber='1', bitmap='null'}), log id: 7ca68b15
2023-02-27 11:09:19,846-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, CreateVolumeVDSCommand, return: 482e7dc2-faf9-480b-bbf9-2ece1a12cc5a, log id: 7ca68b15
2023-02-27 11:09:19,850-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'c75cb919-ba4b-481c-8dcd-733573abe129'
2023-02-27 11:09:19,850-05 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] CommandMultiAsyncTasks::attachTask: Attaching task '9c06a6b6-d2e4-4fe6-a370-d871fc312a16' to command 'c75cb919-ba4b-481c-8dcd-733573abe129'.
2023-02-27 11:09:19,861-05 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Adding task '9c06a6b6-d2e4-4fe6-a370-d871fc312a16' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2023-02-27 11:09:19,867-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] BaseAsyncTask::startPollingTask: Starting to poll task '9c06a6b6-d2e4-4fe6-a370-d871fc312a16'.
2023-02-27 11:09:19,888-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] EVENT_ID: ADD_DISK_INTERNAL(2,036), Add-Disk operation of 'vmdisk2' was initiated by the system.
2023-02-27 11:09:19,890-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-91) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'AddDisk' (id: 'bd6a7f06-1aeb-407a-8dbc-837280be9fb6') waiting on child command id: 'c75cb919-ba4b-481c-8dcd-733573abe129' type:'AddImageFromScratch' to complete
2023-02-27 11:09:19,891-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-91) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'AddDisk' (id: 'd0158085-44c8-4dae-8108-6704c16154f7') waiting on child command id: '9850b22f-13fd-4c17-afef-83483140d670' type:'AddImageFromScratch' to complete
2023-02-27 11:09:19,909-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-31181) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] EVENT_ID: IMPORTEXPORT_STARTING_IMPORT_VM(1,165), Starting to import Vm Implementacion_02-2 to Data Center dc_rcloud, Cluster cl_rcloud
2023-02-27 11:09:21,857-05 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [7346ec67] Lock Acquired to object 'EngineLock:{exclusiveLocks='[0eddd27f-e2d0-4e5f-9fbb-00fd7409c165=PROVIDER]', sharedLocks=''}'
2023-02-27 11:09:21,873-05 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [7346ec67] Running command: SyncNetworkProviderCommand internal: true.
2023-02-27 11:09:21,877-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [7346ec67] EVENT_ID: PROVIDER_SYNCHRONIZATION_STARTED(223), Provider ovirt-provider-ovn synchronization started.
2023-02-27 11:09:21,893-05 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-100) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'ImportVmFromOva' (id: 'f94e159b-a638-4f81-8af0-8b1fb8653170') waiting on child command id: 'd0158085-44c8-4dae-8108-6704c16154f7' type:'AddDisk' to complete
2023-02-27 11:09:21,894-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-100) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'AddDisk' (id: 'bd6a7f06-1aeb-407a-8dbc-837280be9fb6') waiting on child command id: 'c75cb919-ba4b-481c-8dcd-733573abe129' type:'AddImageFromScratch' to complete
2023-02-27 11:09:21,894-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-100) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'AddDisk' (id: 'd0158085-44c8-4dae-8108-6704c16154f7') waiting on child command id: '9850b22f-13fd-4c17-afef-83483140d670' type:'AddImageFromScratch' to complete
2023-02-27 11:09:22,029-05 INFO [org.ovirt.engine.core.sso.service.ExternalOIDCService] (default task-38) [] User admin@ovirt@internalkeycloak-authz with profile [internalsso] successfully logged into external OP with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2023-02-27 11:09:22,142-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [7346ec67] EVENT_ID: PROVIDER_SYNCHRONIZATION_ENDED(224), Provider ovirt-provider-ovn synchronization ended.
2023-02-27 11:09:22,143-05 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [7346ec67] Lock freed to object 'EngineLock:{exclusiveLocks='[0eddd27f-e2d0-4e5f-9fbb-00fd7409c165=PROVIDER]', sharedLocks=''}'
2023-02-27 11:09:25,899-05 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-94) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'ImportVmFromOva' (id: 'f94e159b-a638-4f81-8af0-8b1fb8653170') waiting on child command id: 'd0158085-44c8-4dae-8108-6704c16154f7' type:'AddDisk' to complete
2023-02-27 11:09:25,899-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-94) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'AddDisk' (id: 'bd6a7f06-1aeb-407a-8dbc-837280be9fb6') waiting on child command id: 'c75cb919-ba4b-481c-8dcd-733573abe129' type:'AddImageFromScratch' to complete
2023-02-27 11:09:25,900-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-94) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'AddDisk' (id: 'd0158085-44c8-4dae-8108-6704c16154f7') waiting on child command id: '9850b22f-13fd-4c17-afef-83483140d670' type:'AddImageFromScratch' to complete
2023-02-27 11:09:26,075-05 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-20) [] Polling and updating Async Tasks: 2 tasks, 2 tasks to poll now
2023-02-27 11:09:26,082-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-20) [] SPMAsyncTask::PollTask: Polling task '9c06a6b6-d2e4-4fe6-a370-d871fc312a16' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2023-02-27 11:09:26,088-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-20) [] BaseAsyncTask::onTaskEndSuccess: Task '9c06a6b6-d2e4-4fe6-a370-d871fc312a16' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2023-02-27 11:09:26,090-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-20) [] CommandAsyncTask::endActionIfNecessary: All tasks of command 'c75cb919-ba4b-481c-8dcd-733573abe129' has ended -> executing 'endAction'
2023-02-27 11:09:26,090-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-20) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'c75cb919-ba4b-481c-8dcd-733573abe129'): calling endAction '.
2023-02-27 11:09:26,091-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-20) [] SPMAsyncTask::PollTask: Polling task 'b168c512-9335-405d-b239-05b6e663aaa5' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2023-02-27 11:09:26,091-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31184) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'AddImageFromScratch',
2023-02-27 11:09:26,095-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-20) [] BaseAsyncTask::onTaskEndSuccess: Task 'b168c512-9335-405d-b239-05b6e663aaa5' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2023-02-27 11:09:26,096-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-31184) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command [id=c75cb919-ba4b-481c-8dcd-733573abe129]: Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
2023-02-27 11:09:26,096-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31184) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' completed, handling the result.
2023-02-27 11:09:26,096-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31184) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' succeeded, clearing tasks.
2023-02-27 11:09:26,096-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31184) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '9c06a6b6-d2e4-4fe6-a370-d871fc312a16'
2023-02-27 11:09:26,096-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-20) [] CommandAsyncTask::endActionIfNecessary: All tasks of command '9850b22f-13fd-4c17-afef-83483140d670' has ended -> executing 'endAction'
2023-02-27 11:09:26,096-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31184) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', ignoreFailoverLimit='false', taskId='9c06a6b6-d2e4-4fe6-a370-d871fc312a16'}), log id: 5fe37e69
2023-02-27 11:09:26,096-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-20) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: '9850b22f-13fd-4c17-afef-83483140d670'): calling endAction '.
2023-02-27 11:09:26,097-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31185) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'AddImageFromScratch',
2023-02-27 11:09:26,097-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31184) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, HSMClearTaskVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, HSMTaskGuidBaseVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629', taskId='9c06a6b6-d2e4-4fe6-a370-d871fc312a16'}), log id: 3673d24
2023-02-27 11:09:26,099-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-31185) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command [id=9850b22f-13fd-4c17-afef-83483140d670]: Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
2023-02-27 11:09:26,099-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31185) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' completed, handling the result.
2023-02-27 11:09:26,099-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31185) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' succeeded, clearing tasks.
2023-02-27 11:09:26,099-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31185) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] SPMAsyncTask::ClearAsyncTask: Attempting to clear task 'b168c512-9335-405d-b239-05b6e663aaa5'
2023-02-27 11:09:26,100-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31185) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', ignoreFailoverLimit='false', taskId='b168c512-9335-405d-b239-05b6e663aaa5'}), log id: 20d1c555
2023-02-27 11:09:26,105-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31184) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, HSMClearTaskVDSCommand, return: , log id: 3673d24
2023-02-27 11:09:26,105-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31184) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, SPMClearTaskVDSCommand, return: , log id: 5fe37e69
2023-02-27 11:09:26,106-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31185) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, HSMClearTaskVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, HSMTaskGuidBaseVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629', taskId='b168c512-9335-405d-b239-05b6e663aaa5'}), log id: 162b7940
2023-02-27 11:09:26,107-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31184) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] BaseAsyncTask::removeTaskFromDB: Removed task '9c06a6b6-d2e4-4fe6-a370-d871fc312a16' from DataBase
2023-02-27 11:09:26,107-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31184) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'c75cb919-ba4b-481c-8dcd-733573abe129'
2023-02-27 11:09:26,117-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31185) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, HSMClearTaskVDSCommand, return: , log id: 162b7940
2023-02-27 11:09:26,118-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-31185) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, SPMClearTaskVDSCommand, return: , log id: 20d1c555
2023-02-27 11:09:26,120-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31185) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] BaseAsyncTask::removeTaskFromDB: Removed task 'b168c512-9335-405d-b239-05b6e663aaa5' from DataBase
2023-02-27 11:09:26,120-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-31185) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity '9850b22f-13fd-4c17-afef-83483140d670'
2023-02-27 11:09:33,905-05 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-78) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'ImportVmFromOva' (id: 'f94e159b-a638-4f81-8af0-8b1fb8653170') waiting on child command id: 'd0158085-44c8-4dae-8108-6704c16154f7' type:'AddDisk' to complete
2023-02-27 11:09:33,906-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-78) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Getting volume info for image '7ae5ded5-d0c8-41f8-ab30-401aef4a0e93/482e7dc2-faf9-480b-bbf9-2ece1a12cc5a'
2023-02-27 11:09:33,918-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-78) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, GetVolumeInfoVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, GetVolumeInfoVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629', storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='7ae5ded5-d0c8-41f8-ab30-401aef4a0e93', imageId='482e7dc2-faf9-480b-bbf9-2ece1a12cc5a'}), log id: 576590a3
2023-02-27 11:09:34,065-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-78) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@e712ba11, log id: 576590a3
2023-02-27 11:09:34,065-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-78) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'AddDisk' id: 'bd6a7f06-1aeb-407a-8dbc-837280be9fb6' child commands '[c75cb919-ba4b-481c-8dcd-733573abe129]' executions were completed, status 'SUCCEEDED'
2023-02-27 11:09:34,070-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-78) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Getting volume info for image '096444a2-0f21-4ea0-b4c3-3a4604d232bf/213d572b-c37f-484e-ad50-b513ae68d455'
2023-02-27 11:09:34,083-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-78) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, GetVolumeInfoVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, GetVolumeInfoVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629', storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='096444a2-0f21-4ea0-b4c3-3a4604d232bf', imageId='213d572b-c37f-484e-ad50-b513ae68d455'}), log id: 7cc26313
2023-02-27 11:09:34,108-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-78) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@ed015f5c, log id: 7cc26313
2023-02-27 11:09:34,108-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-78) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'AddDisk' id: 'd0158085-44c8-4dae-8108-6704c16154f7' child commands '[9850b22f-13fd-4c17-afef-83483140d670]' executions were completed, status 'SUCCEEDED'
2023-02-27 11:09:35,113-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
2023-02-27 11:09:35,119-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' successfully.
2023-02-27 11:09:35,131-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', ignoreFailoverLimit='false', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='7ae5ded5-d0c8-41f8-ab30-401aef4a0e93', imageId='482e7dc2-faf9-480b-bbf9-2ece1a12cc5a'}), log id: 57b964f4
2023-02-27 11:09:35,132-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, GetVolumeInfoVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, GetVolumeInfoVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629', storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='7ae5ded5-d0c8-41f8-ab30-401aef4a0e93', imageId='482e7dc2-faf9-480b-bbf9-2ece1a12cc5a'}), log id: 598fc554
2023-02-27 11:09:35,159-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@e712ba11, log id: 598fc554
2023-02-27 11:09:35,159-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@e712ba11, log id: 57b964f4
2023-02-27 11:09:35,221-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, PrepareImageVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, PrepareImageVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629'}), log id: 673ef63a
2023-02-27 11:09:35,661-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, PrepareImageVDSCommand, return: PrepareImageReturn:{status='Status [code=0, message=Done]'}, log id: 673ef63a
2023-02-27 11:09:35,662-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, GetQemuImageInfoVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, GetVolumeInfoVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629', storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='7ae5ded5-d0c8-41f8-ab30-401aef4a0e93', imageId='482e7dc2-faf9-480b-bbf9-2ece1a12cc5a'}), log id: 75196914
2023-02-27 11:09:35,682-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, GetQemuImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.QemuImageInfo@203d7f68, log id: 75196914
2023-02-27 11:09:35,683-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, TeardownImageVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, ImageActionsVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629'}), log id: 41ad864
2023-02-27 11:09:35,935-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, TeardownImageVDSCommand, return: StatusReturn:{status='Status [code=0, message=Done]'}, log id: 41ad864
2023-02-27 11:09:36,003-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [] EVENT_ID: USER_ADD_DISK_TO_VM_FINISHED_SUCCESS(97), The disk vmdisk2 was successfully added to VM Implementacion_02-2.
2023-02-27 11:09:36,005-05 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
2023-02-27 11:09:36,009-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' successfully.
2023-02-27 11:09:36,010-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', ignoreFailoverLimit='false', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='096444a2-0f21-4ea0-b4c3-3a4604d232bf', imageId='213d572b-c37f-484e-ad50-b513ae68d455'}), log id: 1393bd54
2023-02-27 11:09:36,011-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, GetVolumeInfoVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, GetVolumeInfoVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629', storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='096444a2-0f21-4ea0-b4c3-3a4604d232bf', imageId='213d572b-c37f-484e-ad50-b513ae68d455'}), log id: 3d6a9c56
2023-02-27 11:09:36,038-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@ed015f5c, log id: 3d6a9c56
2023-02-27 11:09:36,039-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@ed015f5c, log id: 1393bd54
2023-02-27 11:09:36,052-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, PrepareImageVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, PrepareImageVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629'}), log id: 700e12bc
2023-02-27 11:09:36,473-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, PrepareImageVDSCommand, return: PrepareImageReturn:{status='Status [code=0, message=Done]'}, log id: 700e12bc
2023-02-27 11:09:36,474-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, GetQemuImageInfoVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, GetVolumeInfoVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629', storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='096444a2-0f21-4ea0-b4c3-3a4604d232bf', imageId='213d572b-c37f-484e-ad50-b513ae68d455'}), log id: 4ac0f47b
2023-02-27 11:09:36,494-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, GetQemuImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.QemuImageInfo@44b063f5, log id: 4ac0f47b
2023-02-27 11:09:36,495-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, TeardownImageVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, ImageActionsVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629'}), log id: 5615381f
2023-02-27 11:09:36,727-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, TeardownImageVDSCommand, return: StatusReturn:{status='Status [code=0, message=Done]'}, log id: 5615381f
2023-02-27 11:09:36,750-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) [] EVENT_ID: USER_ADD_DISK_TO_VM_FINISHED_SUCCESS(97), The disk vmdisk1 was successfully added to VM Implementacion_02-2.
2023-02-27 11:09:37,798-05 INFO [org.ovirt.engine.core.bll.exportimport.ConvertOvaCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-51) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Running command: ConvertOvaCommand internal: true. Entities affected : ID: 557421e0-d5f8-4caa-a84c-d1df24cb5a08 Type: VM
2023-02-27 11:09:37,829-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-51) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, PrepareImageVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, PrepareImageVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629'}), log id: 649bcef9
2023-02-27 11:09:38,286-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-51) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, PrepareImageVDSCommand, return: PrepareImageReturn:{status='Status [code=0, message=Done]'}, log id: 649bcef9
2023-02-27 11:09:38,288-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConvertOvaVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-51) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, ConvertOvaVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, ConvertOvaVDSParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629', ovaPath='/root/w2_ovfs/ovf/Implementacion_02-2', vmName='Implementacion_02-2', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', virtioIsoPath='/rhev/data-center/mnt/blockSD/7f20ff07-e000-462e-9471-7efe882687c6/images/8e4c49b1-5078-4c89-9ccc-237b95b5dce4/5d21a9de-5a4d-40e9-93e0-989174dcb912', Disk0='096444a2-0f21-4ea0-b4c3-3a4604d232bf', Disk1='7ae5ded5-d0c8-41f8-ab30-401aef4a0e93'}), log id: 780d2b61
2023-02-27 11:09:38,326-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConvertOvaVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-51) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, ConvertOvaVDSCommand, return: 557421e0-d5f8-4caa-a84c-d1df24cb5a08, log id: 780d2b61
2023-02-27 11:09:40,344-05 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-35) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'ImportVmFromOva' (id: 'f94e159b-a638-4f81-8af0-8b1fb8653170') waiting on child command id: '02556fe8-58c7-438f-8fd1-29a0df677b51' type:'ConvertOva' to complete
2023-02-27 11:09:41,345-05 INFO [org.ovirt.engine.core.bll.exportimport.ConvertVmCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-59) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Conversion of VM from external environment failed: Job '557421e0-d5f8-4caa-a84c-d1df24cb5a08' process failed exit-code: 1
2023-02-27 11:09:42,353-05 ERROR [org.ovirt.engine.core.bll.exportimport.ConvertOvaCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Ending command 'org.ovirt.engine.core.bll.exportimport.ConvertOvaCommand' with failure.
2023-02-27 11:09:42,354-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DeleteV2VJobVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, DeleteV2VJobVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, VdsAndVmIDVDSParametersBase:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629', vmId='557421e0-d5f8-4caa-a84c-d1df24cb5a08'}), log id: 65e6b22b
2023-02-27 11:09:42,360-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DeleteV2VJobVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, DeleteV2VJobVDSCommand, return: , log id: 65e6b22b
2023-02-27 11:09:42,374-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] START, TeardownImageVDSCommand(HostName = ovirtnodo01.redsiscloud.loc, ImageActionsVDSCommandParameters:{hostId='2b47d08d-16f6-4d5b-bdcd-8c8ac6f30629'}), log id: 5cc711f5
2023-02-27 11:09:42,608-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] FINISH, TeardownImageVDSCommand, return: StatusReturn:{status='Status [code=0, message=Done]'}, log id: 5cc711f5
2023-02-27 11:09:42,622-05 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-18) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Command 'ImportVmFromOva' id: 'f94e159b-a638-4f81-8af0-8b1fb8653170' child commands '[d0158085-44c8-4dae-8108-6704c16154f7, bd6a7f06-1aeb-407a-8dbc-837280be9fb6, 02556fe8-58c7-438f-8fd1-29a0df677b51]' executions were completed, status 'FAILED'
2023-02-27 11:09:43,639-05 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [90ee1c89-6cfb-4632-837b-781d5a32ed4c] Ending command 'org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand' with failure.
2023-02-27 11:09:43,661-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveAllVmImagesCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Running command: RemoveAllVmImagesCommand internal: true. Entities affected : ID: 557421e0-d5f8-4caa-a84c-d1df24cb5a08 Type: VM
2023-02-27 11:09:43,684-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Running command: RemoveImageCommand internal: true. Entities affected : ID: 00000000-0000-0000-0000-000000000000 Type: Storage
2023-02-27 11:09:43,693-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', ignoreFailoverLimit='false', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='096444a2-0f21-4ea0-b4c3-3a4604d232bf', postZeros='false', discard='true', forceDelete='false'}), log id: 6ae771f2
2023-02-27 11:09:43,938-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] FINISH, DeleteImageGroupVDSCommand, return: , log id: 6ae771f2
2023-02-27 11:09:43,940-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command '1fc4fb29-5034-49f2-a991-9eb10a4f3f63'
2023-02-27 11:09:43,940-05 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] CommandMultiAsyncTasks::attachTask: Attaching task '002baef1-b642-49b4-9eea-065c311d57b1' to command '1fc4fb29-5034-49f2-a991-9eb10a4f3f63'.
2023-02-27 11:09:43,949-05 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Adding task '002baef1-b642-49b4-9eea-065c311d57b1' (Parent Command 'RemoveImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2023-02-27 11:09:43,953-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] BaseAsyncTask::startPollingTask: Starting to poll task '002baef1-b642-49b4-9eea-065c311d57b1'.
2023-02-27 11:09:43,954-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] BaseAsyncTask::startPollingTask: Starting to poll task '002baef1-b642-49b4-9eea-065c311d57b1'.
2023-02-27 11:09:43,972-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Running command: RemoveImageCommand internal: true. Entities affected : ID: 00000000-0000-0000-0000-000000000000 Type: Storage
2023-02-27 11:09:43,980-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='26a41fba-0334-4b87-9b87-4ac6e86d32f7', ignoreFailoverLimit='false', storageDomainId='588d888c-ff14-4191-a5bc-a38f65dc29e9', imageGroupId='7ae5ded5-d0c8-41f8-ab30-401aef4a0e93', postZeros='false', discard='true', forceDelete='false'}), log id: 3f49936
2023-02-27 11:09:44,317-05 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] FINISH, DeleteImageGroupVDSCommand, return: , log id: 3f49936
2023-02-27 11:09:44,319-05 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'c2f34732-9878-4cbf-bca3-1511ce145e34'
2023-02-27 11:09:44,319-05 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] CommandMultiAsyncTasks::attachTask: Attaching task '3679c4b7-aea9-4fbd-804a-20f479aed410' to command 'c2f34732-9878-4cbf-bca3-1511ce145e34'.
2023-02-27 11:09:44,342-05 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Adding task '3679c4b7-aea9-4fbd-804a-20f479aed410' (Parent Command 'RemoveImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2023-02-27 11:09:44,352-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] BaseAsyncTask::startPollingTask: Starting to poll task '3679c4b7-aea9-4fbd-804a-20f479aed410'.
2023-02-27 11:09:44,352-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] BaseAsyncTask::startPollingTask: Starting to poll task '3679c4b7-aea9-4fbd-804a-20f479aed410'.
2023-02-27 11:09:44,371-05 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Command [id=f94e159b-a638-4f81-8af0-8b1fb8653170]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.network.VmNetworkStatistics; snapshot: 56850e5f-79f5-4414-8381-62ea2f81bf94.
2023-02-27 11:09:44,372-05 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Command [id=f94e159b-a638-4f81-8af0-8b1fb8653170]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.network.VmNetworkInterface; snapshot: 56850e5f-79f5-4414-8381-62ea2f81bf94.
2023-02-27 11:09:44,373-05 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Command [id=f94e159b-a638-4f81-8af0-8b1fb8653170]: Compensating TRANSIENT_ENTITY of org.ovirt.engine.core.common.businessentities.ReleaseMacsTransientCompensation; snapshot: org.ovirt.engine.core.common.businessentities.ReleaseMacsTransientCompensation@2a96e51.
2023-02-27 11:09:44,374-05 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Command [id=f94e159b-a638-4f81-8af0-8b1fb8653170]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmStatistics; snapshot: 557421e0-d5f8-4caa-a84c-d1df24cb5a08.
2023-02-27 11:09:44,375-05 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Command [id=f94e159b-a638-4f81-8af0-8b1fb8653170]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmDynamic; snapshot: 557421e0-d5f8-4caa-a84c-d1df24cb5a08.
2023-02-27 11:09:44,375-05 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Command [id=f94e159b-a638-4f81-8af0-8b1fb8653170]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.Snapshot; snapshot: f3661ecd-0965-4099-87fa-b341fb934628.
2023-02-27 11:09:44,377-05 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Command [id=f94e159b-a638-4f81-8af0-8b1fb8653170]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmStatic; snapshot: 557421e0-d5f8-4caa-a84c-d1df24cb5a08.
2023-02-27 11:09:44,385-05 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Lock freed to object 'EngineLock:{exclusiveLocks='[557421e0-d5f8-4caa-a84c-d1df24cb5a08=VM, 8e4c49b1-5078-4c89-9ccc-237b95b5dce4=DISK, Implementacion_02-2=VM_NAME]', sharedLocks=''}'
2023-02-27 11:09:44,409-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] EVENT_ID: IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm Implementacion_02-2 to Data Center dc_rcloud, Cluster cl_rcloud
2023-02-27 11:09:45,435-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-50) [17e97345] Waiting on remove image command to complete the task '3679c4b7-aea9-4fbd-804a-20f479aed410'
2023-02-27 11:09:45,437-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-50) [17e97345] Waiting on remove image command to complete the task '002baef1-b642-49b4-9eea-065c311d57b1'
2023-02-27 11:09:45,439-05 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-50) [17e97345] Command 'RemoveAllVmImages' (id: 'e8fcae23-dbaa-4ac8-8bd0-251aad0e9a04') waiting on child command id: '1fc4fb29-5034-49f2-a991-9eb10a4f3f63' type:'RemoveImage' to complete
2023-02-27 11:09:46,097-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-100) [] Task id '002baef1-b642-49b4-9eea-065c311d57b1' is in pre-polling period and should not be polled. Pre-polling period is 60000 millis.
2023-02-27 11:09:46,097-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-100) [] Task id '3679c4b7-aea9-4fbd-804a-20f479aed410' is in pre-polling period and should not be polled. Pre-polling period is 60000 millis.
2023-02-27 11:09:47,443-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-76) [17e97345] Waiting on remove image command to complete the task '3679c4b7-aea9-4fbd-804a-20f479aed410'
2023-02-27 11:09:47,445-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-76) [17e97345] Waiting on remove image command to complete the task '002baef1-b642-49b4-9eea-065c311d57b1'
2023-02-27 11:09:47,447-05 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-76) [17e97345] Command 'RemoveAllVmImages' (id: 'e8fcae23-dbaa-4ac8-8bd0-251aad0e9a04') waiting on child command id: '1fc4fb29-5034-49f2-a991-9eb10a4f3f63' type:'RemoveImage' to complete
2023-02-27 11:09:51,451-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-63) [17e97345] Waiting on remove image command to complete the task '3679c4b7-aea9-4fbd-804a-20f479aed410'
2023-02-27 11:09:51,453-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-63) [17e97345] Waiting on remove image command to complete the task '002baef1-b642-49b4-9eea-065c311d57b1'
2023-02-27 11:09:51,455-05 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-63) [17e97345] Command 'RemoveAllVmImages' (id: 'e8fcae23-dbaa-4ac8-8bd0-251aad0e9a04') waiting on child command id: '1fc4fb29-5034-49f2-a991-9eb10a4f3f63' type:'RemoveImage' to complete
2023-02-27 11:09:56,098-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-71) [] Task id '002baef1-b642-49b4-9eea-065c311d57b1' is in pre-polling period and should not be polled. Pre-polling period is 60000 millis.
2023-02-27 11:09:56,098-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-71) [] Task id '3679c4b7-aea9-4fbd-804a-20f479aed410' is in pre-polling period and should not be polled. Pre-polling period is 60000 millis.
2023-02-27 11:09:59,460-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-11) [17e97345] Waiting on remove image command to complete the task '3679c4b7-aea9-4fbd-804a-20f479aed410'
2023-02-27 11:09:59,463-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-11) [17e97345] Waiting on remove image command to complete the task '002baef1-b642-49b4-9eea-065c311d57b1'
2023-02-27 11:09:59,465-05 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-11) [17e97345] Command 'RemoveAllVmImages' (id: 'e8fcae23-dbaa-4ac8-8bd0-251aad0e9a04') waiting on child command id: '1fc4fb29-5034-49f2-a991-9eb10a4f3f63' type:'RemoveImage' to complete
2023-02-27 11:10:06,098-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-59) [] Task id '002baef1-b642-49b4-9eea-065c311d57b1' is in pre-polling period and should not be polled. Pre-polling period is 60000 millis.
2023-02-27 11:10:06,098-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-59) [] Task id '3679c4b7-aea9-4fbd-804a-20f479aed410' is in pre-polling period and should not be polled. Pre-polling period is 60000 millis.
2023-02-27 11:10:08,215-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-38) [16fe44b7-87c1-4419-a1f1-7a7b1b0c2584] EVENT_ID: ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Query OVA info. Run import yaml on py3.
2023-02-27 11:10:08,222-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-38) [16fe44b7-87c1-4419-a1f1-7a7b1b0c2584] EVENT_ID: ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Query OVA info. Set facts.
2023-02-27 11:10:08,229-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-38) [16fe44b7-87c1-4419-a1f1-7a7b1b0c2584] EVENT_ID: ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Query OVA info. Create temporary directory.
2023-02-27 11:10:09,470-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Waiting on remove image command to complete the task '3679c4b7-aea9-4fbd-804a-20f479aed410'
2023-02-27 11:10:09,472-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Waiting on remove image command to complete the task '002baef1-b642-49b4-9eea-065c311d57b1'
2023-02-27 11:10:09,474-05 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29) [17e97345] Command 'RemoveAllVmImages' (id: 'e8fcae23-dbaa-4ac8-8bd0-251aad0e9a04') waiting on child command id: '1fc4fb29-5034-49f2-a991-9eb10a4f3f63' type:'RemoveImage' to complete
2023-02-27 11:10:11,240-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-38) [16fe44b7-87c1-4419-a1f1-7a7b1b0c2584] EVENT_ID: ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Query OVA info. Copy query_ova.py to temp directory.
2023-02-27 11:10:16,099-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [] Task id '002baef1-b642-49b4-9eea-065c311d57b1' is in pre-polling period and should not be polled. Pre-polling period is 60000 millis.
2023-02-27 11:10:16,099-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [] Task id '3679c4b7-aea9-4fbd-804a-20f479aed410' is in pre-polling period and should not be polled. Pre-polling period is 60000 millis.
2023-02-27 11:10:19,479-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-17) [17e97345] Waiting on remove image command to complete the task '3679c4b7-aea9-4fbd-804a-20f479aed410'
2023-02-27 11:10:19,482-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-17) [17e97345] Waiting on remove image command to complete the task '002baef1-b642-49b4-9eea-065c311d57b1'
2023-02-27 11:10:19,484-05 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-17) [17e97345] Command 'RemoveAllVmImages' (id: 'e8fcae23-dbaa-4ac8-8bd0-251aad0e9a04') waiting on child command id: '1fc4fb29-5034-49f2-a991-9eb10a4f3f63' type:'RemoveImage' to complete
2023-02-27 11:10:26,099-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-83) [] Task id '002baef1-b642-49b4-9eea-065c311d57b1' is in pre-polling period and should not be polled. Pre-polling period is 60000 millis.
2023-02-27 11:10:26,099-05 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-83) [] Task id '3679c4b7-aea9-4fbd-804a-20f479aed410' is in pre-polling period and should not be polled. Pre-polling period is 60000 millis.
2023-02-27 11:10:26,255-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-38) [16fe44b7-87c1-4419-a1f1-7a7b1b0c2584] EVENT_ID: ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Query OVA info. Run query script.
2023-02-27 11:10:26,264-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-38) [16fe44b7-87c1-4419-a1f1-7a7b1b0c2584] EVENT_ID: ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Query OVA info. Remove temp directory.
2023-02-27 11:10:29,273-05 WARN [org.ovirt.engine.core.utils.ovf.OvfReader] (default task-38) [16fe44b7-87c1-4419-a1f1-7a7b1b0c2584] could not retrieve volume id of file1 from ovf, generating new guid
2023-02-27 11:10:29,274-05 WARN [org.ovirt.engine.core.utils.ovf.OvfReader] (default task-38) [16fe44b7-87c1-4419-a1f1-7a7b1b0c2584] could not retrieve disk id of vmdisk2 from ovf, generating new guid
2023-02-27 11:10:29,274-05 WARN [org.ovirt.engine.core.utils.ovf.OvfReader] (default task-38) [16fe44b7-87c1-4419-a1f1-7a7b1b0c2584] didn't find disk provisioned size thus allocating the virtual size
2023-02-27 11:10:29,498-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-95) [17e97345] Waiting on remove image command to complete the task '3679c4b7-aea9-4fbd-804a-20f479aed410'
2023-02-27 11:10:29,500-05 INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-95) [17e97345] Waiting on remove image command to complete the task '002baef1-b642-49b4-9eea-065c311d57b1'
2023-02-27 11:10:29,502-05 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-95) [17e97345] Command 'RemoveAllVmImages' (id: 'e8fcae23-dbaa-4ac8-8bd0-251aad0e9a04') waiting on child command id: '1fc4fb29-5034-49f2-a991-9eb10a4f3f63' type:'RemoveImage' to complete
Any suggestions for me?
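The engine log above only shows that the conversion job exited with code 1; the detailed virt-v2v output normally ends up on the host that ran the conversion. A minimal sketch for locating the most recent conversion logs there, assuming the default vdsm layout with logs under /var/log/vdsm/import:

# Sketch: list the newest conversion logs on the host that ran the import.
# The directory is an assumption based on a default vdsm installation.
import glob
import os

LOG_DIR = "/var/log/vdsm/import"
logs = sorted(glob.glob(os.path.join(LOG_DIR, "*.log")), key=os.path.getmtime)
for path in logs[-3:]:
    print(path)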
2 years, 4 months
Host Reboot Timeout of 10 Minutes
by Peter H
I'm working in a group that maintains a large oVirt setup based on 4.4.1,
which works very well. We are wary of upgrading in place and prefer to set
up a new installation, gradually moving the hosts into it one by one.
We have tried 4.4.10 and 4.5.1 - 4.5.4 based on CentOS Stream 8, Rocky 8 and
AlmaLinux 9.1, with various problems. The worst was that the rpm database
ended up in a catch-22 state.
Using AlmaLinux 9.1 and the current oVirt 4.5.4 seems promising, as no rpm
problems are present after installation. We have only one nuisance left,
which we have seen in every installation attempt since 4.4.10:
when rebooting a host it takes 10 minutes before it is activated again. In
4.4.1 the hosts are activated a few seconds after they have booted up.
I have found the following in the engine log:
2023-01-24 23:01:57,564+01 INFO
[org.ovirt.engine.core.bll.SshHostRebootCommand]
(EE-ManagedThreadFactory-engine-Thread-1513) [2bb08d20] Waiting 600
seconds, for server to finish reboot process.
Our Ansible deployment playbooks time out. We could increase their timeout,
but why has this 10-minute delay been introduced?
Is there a configuration setting where this timeout can be lowered?
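For reference, the 600 seconds in that log line appear to come from an engine-side reboot timeout. A minimal sketch of how such a value might be inspected and lowered on the engine machine, assuming the key is called ServerRebootTimeout (confirm the exact name with engine-config -l before setting anything):

# Sketch only: query and lower the host-reboot timeout via engine-config.
# 'ServerRebootTimeout' is an assumption; verify with 'engine-config -l'.
# The engine service must be restarted for the change to take effect.
import subprocess

def engine_config(*args):
    return subprocess.run(["engine-config", *args], check=True,
                          capture_output=True, text=True).stdout

print(engine_config("-g", "ServerRebootTimeout"))   # current value, in seconds
engine_config("-s", "ServerRebootTimeout=300")      # e.g. lower it to 5 minutes
subprocess.run(["systemctl", "restart", "ovirt-engine"], check=True)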
BR
Peter H.
2 years, 4 months
Out-of-sync networks can only be detached
by Sakhi Hadebe
Hi,
I have a 3-node oVirt cluster. I have configured 2 logical networks:
ovirtmgmt and public. The public logical network is attached on only 2 nodes
and fails to attach on the 3rd node with the error below:
Invalid operation, out-of-sync network 'public' can only be detached.
I have been stuck on this for almost the whole day now. How do I fix
this error?
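The usual way out is to resynchronize the host's network configuration first (the "Sync All Networks" button on the host's Network Interfaces tab in the Administration Portal) and only then attach the network again. A minimal sketch of the same action through the Python SDK, with URL, credentials and host name as placeholders:

# Sketch: resync an out-of-sync host network configuration via the SDK.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',                            # placeholder
    password='password',                                  # placeholder
    insecure=True,                                        # use ca_file=... in production
)
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=host3')[0]         # the out-of-sync host
hosts_service.host_service(host.id).sync_all_networks()
connection.close()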
--
Regards,
Sakhi Hadebe
2 years, 4 months
Create VM with Python SDK
by Alan G
Hi,
I'm trying to create a VM while attaching an existing disk. I can create the VM and then attach the disk with an additional call, but I thought it should be possible to do it in one hit.
My code is
vm = vms_service.add(
    types.Vm(
        name='alma8.7',
        description='AlmaLinux 8.7 CIS Packer image',
        cluster=types.Cluster(
            name='Default',
        ),
        type=types.VmType('server'),
        template=types.Template(
            name='Blank',
        ),
        disk_attachments=[types.DiskAttachment(
            disk=types.Disk(id="0532e728-a1fb-4ff8-a4f3-0702fc876fce"),
            bootable=True,
            active=True,
            interface=types.DiskInterface.VIRTIO,
        )],
    ),
)
This request returns no error but the disk isn't actually attached to the created VM.
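For what it's worth, the two-step approach tends to be the reliable one for existing disks: create the VM first, then add the attachment through the VM's disk_attachments_service. A minimal sketch, reusing the vm object, vms_service and types import from the snippet above:

# Sketch: attach the existing disk to the freshly created VM in a second call.
vm_service = vms_service.vm_service(vm.id)
disk_attachments_service = vm_service.disk_attachments_service()
disk_attachments_service.add(
    types.DiskAttachment(
        disk=types.Disk(id="0532e728-a1fb-4ff8-a4f3-0702fc876fce"),
        interface=types.DiskInterface.VIRTIO,
        bootable=True,
        active=True,
    ),
)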
2 years, 4 months
4.3 -> 4.4 upgrade failed
by KSNull Zero
Hello!
Trying to upgrade an old 4.3 installation to 4.4 using this documentation:
https://www.ovirt.org/documentation/upgrade_guide/#Upgrading_the_Manager_...
On the target engine host I got this error during the restore operation:
Start of engine-backup with mode 'restore'
scope: all
archive file: backup.bck
log file: /var/log/ovirt-engine-backup/ovirt-engine-restore-20230224162609.log
Preparing to restore:
- Unpacking file 'backup.bck'
Restoring:
- Files
------------------------------------------------------------------------------
Please note:
Operating system is different from the one used during backup.
Current operating system: redhat8
Operating system at backup: redhat7
Apache httpd configuration will not be restored.
You will be asked about it on the next engine-setup run.
------------------------------------------------------------------------------
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
- user 'engine', database 'engine_history'
FATAL: Existing database 'engine_history' or user 'engine' found and temporary ones created - Please clean up everything and try again
One more log is here:
https://pastebin.com/Rg2kaNnM
Can you please help and provide some information on how this can be fixed?
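This failure usually means that a previous engine-setup or restore attempt already created the engine user and the engine/engine_history databases on the new machine. One possible cleanup path, assuming those leftovers come from the failed attempt and hold no data you need (double-check with psql -l before dropping anything), after which the engine-backup restore with its --provision-db/--provision-dwh-db options can be run again:

# Sketch only: drop leftover oVirt databases/role before retrying the restore.
# Destructive - verify that these objects really are leftovers first.
import subprocess

def pg(*cmd):
    # run a PostgreSQL client utility as the postgres system user
    subprocess.run(["sudo", "-u", "postgres", *cmd], check=False)

pg("psql", "-l")                 # inspect what exists before touching anything
pg("dropdb", "engine")           # leftover engine database, if present
pg("dropdb", "engine_history")   # leftover DWH database, if present
pg("dropuser", "engine")         # leftover engine role, if present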
2 years, 4 months
Re: [External] : Configure HA.
by Anthony Bustillos Gonzalez
Hi Marcos,
Is it possible to have VMs in HA? Because when I ran this test, I saw this issue:
“Highly Available VM failed. It will be restarted automatically.”
Also, since this is HA, what happens when there is an abrupt outage on any host?
I need to have my VMs in HA.
2 years, 4 months
Configure HA.
by Anthony Bustillos Gonzalez
Hello team,
I'm working to create a cluster. I have 2 hosts with the same specifications, and I'm using iSCSI to connect both hosts to my storage.
Today everything is okay... but I have this issue.
Test:
When I migrate a VM to another host, it works.
When I put the active host into maintenance, oVirt moves the VMs to other hosts.
Issue:
When I power the host off via iDRAC... the VMs on that host change status from active to unknown, then they are started on other hosts, but the VMs are rebooted.
Any idea?
2 years, 4 months
Setting up hyperconverged engine with gluster storage
by Andy Michielsen
Hello,
I'm setting up a new oVirt environment version 4.5 and want to use the
hyperconverged setup on gluster storage.
When I configure all the parameters and start the installation, it keeps
failing and I don't know why:
TASK [gluster.infra/roles/backend_setup : Group devices by volume group
name, including existing devices] ***
fatal: [server1.test.local]: FAILED! => {"msg": "The task includes an
option with an undefined variable. The error was: 'str object' has no
attribute 'vgname'\n\nThe error appears to be in
'/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml':
line 3, column 3, but may\nbe elsewhere in the file depending on the exact
syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Group
devices by volume group name, including existing devices\n ^ here\n"}
fatal: [ server2.test.local ]: FAILED! => {"msg": "The task includes an
option with an undefined variable. The error was: 'str object' has no
attribute 'vgname'\n\nThe error appears to be in
'/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml':
line 3, column 3, but may\nbe elsewhere in the file depending on the exact
syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Group
devices by volume group name, including existing devices\n ^ here\n"}
fatal: [ server3.test.local ]: FAILED! => {"msg": "The task includes an
option with an undefined variable. The error was: 'str object' has no
attribute 'vgname'\n\nThe error appears to be in
'/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml':
line 3, column 3, but may\nbe elsewhere in the file depending on the exact
syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Group
devices by volume group name, including existing devices\n ^ here\n"}
I'm using the ISO version
ovirt-node-ng-4.5.4-0.20221206.0+1
2 years, 4 months
OVA Export extreme Slow
by m.rohweder@itm-h.de
Hi
I started to export 1 VM to OVA on an NFS server.
After 14 h of runtime, 325 G out of 500 G have been transferred.
Other copy jobs I tested run at full Gbit speed.
What can I do to speed this up?
Or what is a faster way to export a VM to other systems like Proxmox?
2 years, 4 months
HostedEngine restarts from time to time
by ziyi Liu
Version 4.5.3.2-1.el8
There are two red warnings in the /var/log/messages file
kernel: shpchp 0000:01:00.0: Slot initialization failed
kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
2 years, 4 months
Can't log on into my engine : keystore was tampered with, or password was incorrect
by Andy Michielsen
Hello,
I had an issue on my oVirt 4.3 environment where the certificates had
expired. I understand letting that happen was not smart, but now it's too
late to renew them.
I found some information on how to fix that, but the more I tried, the more
confused I got and the further I think I got from solving it.
So I took a full backup and started a new installation of my engine on
CentOS Stream 8 with oVirt 4.5.
After the complete installation it worked fine, but with a clean
environment, and I really wanted to restore my current setup. So I ran
engine-cleanup and restored the full backup.
I think it went fine, but when I open the engine I get the message
"keystore was tampered with, or password was incorrect" even before I try
to log in.
Why is it telling me this, and how can I figure this out and solve it?
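That message usually points at a mismatch between the restored PKI files and the truststore password the engine is configured with. A possible first diagnostic, assuming a default setup where the truststore lives at /etc/pki/ovirt-engine/.truststore and the password (default "mypass") may be overridden as ENGINE_PKI_TRUST_STORE_PASSWORD in one of the /etc/ovirt-engine/engine.conf.d/*.conf files:

# Sketch: check whether the configured truststore password still opens the
# restored truststore. Paths and variable names are assumptions based on a
# default engine setup.
import glob
import re
import subprocess

password = "mypass"  # assumed default; overridden below if configured explicitly
for conf in sorted(glob.glob("/etc/ovirt-engine/engine.conf.d/*.conf")):
    with open(conf) as f:
        for line in f:
            m = re.match(r'\s*ENGINE_PKI_TRUST_STORE_PASSWORD="?([^"\n]+)"?', line)
            if m:
                password = m.group(1)

# keytool fails (often with a "tampered with" message) if the password is wrong
result = subprocess.run(
    ["keytool", "-list",
     "-keystore", "/etc/pki/ovirt-engine/.truststore",
     "-storepass", password],
    capture_output=True, text=True,
)
print(result.returncode)
print((result.stderr or result.stdout)[:300])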
Kind regards.
2 years, 4 months
Problem after upgrading oVirt
by Andy Michielsen
Hello all,
I'm struggling with an issue in my oVirt environment that I can't figure
out on my own, and I'm hoping you can point me in the right direction.
My old oVirt 4.3 environment's certificates expired. I thought I could
fix it by following some tutorials, but no luck so far.
I have now upgraded my engine to 4.5, and I get this message
"Keystore was tampered with, or password is incorrect"
Even before I can log on.
I'm looking through the ovirt-engine log, but so far I haven't found the
exact reason.
Any help would be greatly appreciated.
Kind regards.
2 years, 4 months