Host deploy failure: Configure OVN for oVirt
by stephan.badenhorst@fnb.co.za
Good day,
I am running into a problem during host deploy via the oVirt engine GUI since upgrading to ovirt-engine-4.5.5-1.el8. The "Configure OVN for oVirt" task seems to fail when trying to run the vdsm-tool ovn-config command. Host deploy used to work fine when the engine was on version 4.5.4.
Can anyone guide me on the right path to get past this issue?
This does not seem to be a new problem - https://lists.ovirt.org/archives/list/users@ovirt.org/thread/IDLGSBQFX35E...
Log extract:
2024-02-01 14:48:19 SAST - TASK [ovirt-provider-ovn-driver : Configure OVN for oVirt] *********************
.
.
.
"stdout" : "fatal: [mob-r1-l-ovirt-aa-1-23.x.fnb.co.za]: FAILED! => {\"changed\": true, \"cmd\": [\"vdsm-tool\", \"ovn-config\", \"192.168.2.100\", \"host23.mydomain.com\"], \"delta\": \"0:00:00.538143\", \"end
\": \"2024-02-01 14:48:20.596823\", \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2024-02-01 14:48:20.058680\", \"stderr\": \"Traceback (most recent call last):\\n File \\\"/usr/lib/python3.6/site-packages/vdsm/t
ool/ovn_config.py\\\", line 117, in get_network\\n return networks[net_name]\\nKeyError: 'host23.mydomain.com'\\n\\nDuring handling of the above exception, another exception occurred:\\n\\nTraceback (most rec
ent call last):\\n File \\\"/usr/bin/vdsm-tool\\\", line 195, in main\\n return tool_command[cmd][\\\"command\\\"](*args)\\n File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 63, in ovn_config\\n
ip_address = get_ip_addr(get_network(network_caps(), net_name))\\n File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 119, in get_network\\n raise NetworkNotFoundError(net_name)\\nvdsm.tool.ovn
_config.NetworkNotFoundError: host23.mydomain.com\", \"stderr_lines\": [\"Traceback (most recent call last):\", \" File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 117, in get_network
\", \" return networks[net_name]\", \"KeyError: 'host23.mydomain.com'\", \"\", \"During handling of the above exception, another exception occurred:\", \"\", \"Traceback (most recent call last):\", \" File \
\\"/usr/bin/vdsm-tool\\\", line 195, in main\", \" return tool_command[cmd][\\\"command\\\"](*args)\", \" File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 63, in ovn_config\", \" ip_address =
get_ip_addr(get_network(network_caps(), net_name))\", \" File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 119, in get_network\", \" raise NetworkNotFoundError(net_name)\", \"vdsm.tool.ovn_config.
NetworkNotFoundError: host23.mydomain.com\"], \"stdout\": \"\", \"stdout_lines\": []}",
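The traceback shows get_network() being handed the host FQDN ('host23.mydomain.com') where VDSM expects one of its own network names, which is what raises NetworkNotFoundError. A diagnostic sketch along those lines, assuming vdsm-client and vdsm-tool are available on the host and that the management network is the usual ovirtmgmt:
```python
#!/usr/bin/env python3
# Diagnostic sketch only (not from the original post): list the networks VDSM
# knows about on the host, then retry ovn-config against one of them. It assumes
# the second argument to "vdsm-tool ovn-config" should be an existing VDSM
# network name (usually "ovirtmgmt") or a local tunneling IP, not the host FQDN
# seen in the failing task. Run it on the host being deployed.
import json
import subprocess

# Ask VDSM for its capabilities and print the configured network names.
caps = json.loads(
    subprocess.check_output(["vdsm-client", "Host", "getCapabilities"])
)
print("Networks known to VDSM:", sorted(caps.get("networks", {})))

# If "ovirtmgmt" (or your management network) is in that list, retry the OVN
# setup with it instead of the FQDN. 192.168.2.100 is the OVN central from the log.
subprocess.run(
    ["vdsm-tool", "ovn-config", "192.168.2.100", "ovirtmgmt"],
    check=True,
)
```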
Thanks in advance!!
Stephan
5 days, 10 hours
Unable to access ovirt Admin Screen from ovirt Host
by louisb@ameritech.net
I've reinstalled oVirt 4.4 on my server remotely via the Cockpit terminal. I'm able to access the oVirt admin screen remotely from the laptop that I used for the install. However, using the same URL from the server console I'm unable to gain access to the admin screen.
Following the instructions in the documentation, I've modified the file /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf to reflect the DNS name, and I entered the IP address. But I'm still unable to access the screen from the server console.
What else needs to change in order to gain access from the server console?
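For the record, the setting that file is meant to carry is SSO_ALTERNATE_ENGINE_FQDNS, a space-separated list of every extra hostname or IP the portal should accept besides the FQDN given at engine-setup time, and ovirt-engine has to be restarted afterwards. A minimal sketch of generating such a drop-in and restarting the engine, with placeholder values (not taken from this post):
```python
#!/usr/bin/env python3
# Sketch: write an SSO drop-in listing the alternate names/IPs the engine should
# accept, then restart ovirt-engine. All values here are placeholders.
import subprocess

conf_path = "/etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf"
# Every hostname or IP you will type into the browser, besides the engine FQDN
# chosen at engine-setup time, goes into this space-separated list.
alternate_fqdns = ["engine.example.lan", "192.0.2.10"]

with open(conf_path, "w") as conf:
    conf.write('SSO_ALTERNATE_ENGINE_FQDNS="%s"\n' % " ".join(alternate_fqdns))

# The engine only reads engine.conf.d at startup.
subprocess.run(["systemctl", "restart", "ovirt-engine"], check=True)
```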
Thanks
2 weeks, 2 days
VM has been paused due to no Storage space error.
by suporte@logicworks.pt
Hello,
we are running oVirt Version 4.5.4-1.el8 on CentOS 8, and randomly we get this error:
VM has been paused due to no Storage space error.
We have plenty of space on the iSCSI storage. This is a preallocated disk, VirtIO-SCSI.
There is no user interaction. It has happened, so far, with 3 VMs, Windows and Ubuntu.
This service was stopped: dnf-makecache.service
This is what I found on the engine log:
2024-08-19 01:04:35,522+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-25) [eb7e5f1] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Up' --> 'Paused'
2024-08-19 01:04:35,665+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-25) [eb7e5f1] EVENT_ID: VM_PAUSED_ENOSPC(138), VM Bravo has been paused due to no Storage space error.
2024-08-19 09:26:35,855+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-29) [72482216] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Paused' --> 'Down'
2024-08-19 09:26:48,114+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [72482216] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'WaitForLaunch' --> 'PoweringUp'
2024-08-19 09:27:50,062+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-6) [] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringUp' --> 'Up'
2024-08-19 09:29:25,145+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [72482216] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Up' --> 'Paused'
2024-08-19 09:29:25,273+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-15) [72482216] EVENT_ID: VM_PAUSED_ENOSPC(138), VM Bravo has been paused due to no Storage space error.
2024-08-19 09:37:26,128+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [6d88f065] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Paused' --> 'Down'
2024-08-19 09:41:43,300+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [6d88f065] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'WaitForLaunch' --> 'PoweringUp'
2024-08-19 09:42:14,882+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-23) [6d88f065] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringUp' --> 'Up'
2024-08-19 09:42:59,792+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [6d88f065] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Up' --> 'Paused'
2024-08-19 09:42:59,894+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-15) [6d88f065] EVENT_ID: VM_PAUSED_ENOSPC(138), VM Bravo has been paused due to no Storage space error.
2024-08-19 09:45:30,334+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [6b3d8ee] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Paused' --> 'Down'
2024-08-19 09:47:51,068+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [6b3d8ee] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'WaitForLaunch' --> 'PoweringUp'
2024-08-19 09:48:50,710+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-80) [] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringUp' --> 'Up'
2024-08-19 10:06:38,810+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [1dd98021] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringDown' --> 'Down'
2024-08-19 10:08:11,606+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [1dd98021] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'WaitForLaunch' --> 'PoweringUp'
2024-08-19 10:09:12,507+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-25) [] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringUp' --> 'Up'
2024-08-19 10:21:13,835+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [63fa2421] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'Up' --> 'Down'
2024-08-19 10:25:19,302+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [63fa2421] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'WaitForLaunch' --> 'PoweringUp'
2024-08-19 10:26:05,456+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-3) [63fa2421] VM 'ccc65521-934d-4f77-adf3-9f9eeb83a4f8'(Bravo) moved from 'PoweringUp' --> 'Up'
And we cannot start the VM anymore.
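For what it's worth, VM_PAUSED_ENOSPC on block (iSCSI/FC) storage usually points at a thin/qcow2 layer, for example a snapshot sitting on top of the preallocated disk, that VDSM could not extend in time, rather than at the raw free space of the LUN. One quick check is what the engine itself reports for the domain; a rough sketch with the Python SDK (connection details are placeholders):
```python
#!/usr/bin/env python3
# Sketch: compare available vs. committed space on each storage domain as the
# engine sees it. URL/credentials/CA path are placeholders, not from this post.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="secret",
    ca_file="/etc/pki/ovirt-engine/ca.pem",
)

sds_service = connection.system_service().storage_domains_service()
for sd in sds_service.list():
    if sd.available is None:  # unattached domains may not report space
        continue
    gib = 1024 ** 3
    print(
        "%-20s available=%6d GiB  used=%6d GiB  committed=%6d GiB"
        % (sd.name, sd.available // gib, sd.used // gib, (sd.committed or 0) // gib)
    )

connection.close()
```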
Any idea?
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
2 weeks, 4 days
SPM and Task error ...
by Enrico
Hi all,
my oVirt cluster has 3 hypervisors running CentOS 7.5.1804, vdsm is 4.20.39.1-1.el7, ovirt-engine is 4.2.4.5-1.el7, and the storage systems are HP MSA P2000 and 2050 (fibre channel).
I need to stop one of the hypervisors for maintenance, but this system is the storage pool manager.
For this reason I decided to manually activate SPM on one of the other nodes, but this operation is not successful.
In the ovirt engine (engine.log) the error is this:
2019-07-25 12:39:16,744+02 INFO
[org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
ForceSelectSPMCommand internal: false. Entities affected : ID:
81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group
MANIPULATE_HOST with role type ADMIN
2019-07-25 12:39:16,745+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopOnIrsVDSCommand(
SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false'}), log id: 37bf4639
2019-07-25 12:39:16,747+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
ResetIrsVDSCommand(
ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
ignoreStopFailed='false'}), log id: 2522686f
2019-07-25 12:39:16,749+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopVDSCommand(HostName = infn-vm05.management,
SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
2019-07-25 12:39:16,758+02 *ERROR*
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] SpmStopVDSCommand::Not
stopping SPM on vds 'infn-vm05.management', pool id
'18d57688-6ed4-43b8-bd7c-0665b55950b7' as there are uncleared tasks
'Task 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e', status 'running''
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopVDSCommand, log id: 1810fd8b
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
ResetIrsVDSCommand, log id: 2522686f
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopOnIrsVDSCommand, log id: 37bf4639
2019-07-25 12:39:16,760+02 *ERROR*
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] EVENT_ID:
USER_FORCE_SELECTED_SPM_STOP_FAILED(4,096), Failed to force select
infn-vm07.management as the SPM due to a failure to stop the current SPM.
while in the hypervisor (SPM) vdsm.log:
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 *ERROR*
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::logEndTaskFailure: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
with failure:
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,751+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 34ae2b2f
2019-07-25 12:39:18,752+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 34ae2b2f
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::onTaskEndSuccess: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
successfully.
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 42de0c2b
2019-07-25 12:39:18,759+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 42de0c2b
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Cleaning zombie
tasks: Clearing async task 'Unknown' that started at 'Fri May 03
14:48:50 CEST 2019'
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,765+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: da77af2
2019-07-25 12:39:18,766+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: da77af2
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
There seems to be a relation between this error and a task that has remained hanging. From the SPM server:
# vdsm-client Task getInfo taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
    "verb": "prepareMerge",
    "id": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e"
}
# vdsm-client Task getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
    "message": "running job 1 of 1",
    "code": 0,
    "taskID": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e",
    "taskResult": "",
    "taskState": "running"
}
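For reference, VDSM's Task API also exposes stop and clear verbs, so if the prepareMerge job is genuinely stuck the usual sequence on the SPM host looks roughly like the sketch below; treat it only as a sketch, since stopping a live-merge task can leave the volume chain inconsistent:
```python
#!/usr/bin/env python3
# Sketch only: poll a VDSM task and, if it is genuinely stuck, stop and clear it
# so the SPM can be moved. Run on the SPM host; the task ID is the one from the
# output above.
import json
import subprocess

TASK_ID = "fdcf4d1b-82fe-49a6-b233-323ebe568f8e"

def task(verb):
    out = subprocess.check_output(["vdsm-client", "Task", verb, "taskID=%s" % TASK_ID])
    return json.loads(out)

status = task("getStatus")
print("task state:", status["taskState"], "-", status["message"])

if status["taskState"] == "running":
    # Only if the job has made no progress for a long time: ask VDSM to stop it,
    # then clear it so the engine no longer sees an uncleared task on the SPM.
    subprocess.run(["vdsm-client", "Task", "stop", "taskID=%s" % TASK_ID], check=True)
    subprocess.run(["vdsm-client", "Task", "clear", "taskID=%s" % TASK_ID], check=True)
```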
How can I solve this problem?
Thanks a lot for your help!!
Best Regards
Enrico
--
_______________________________________________________________________
Enrico Becchetti Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY)
Phone:+39 075 5852777 Mail: Enrico.Becchetti<at>pg.infn.it
_______________________________________________________________________
1 month
oVirt CLI tool for automation tasks
by munnadawood@gmail.com
We recently migrated from VMware to oVirt. I am looking for a CLI tool well suited to my automation tasks, such as creating, cloning, and migrating hundreds of virtual machines in an oVirt cluster.
With VMware I was using govc (a vSphere CLI built on top of govmomi). Another option I have read about is PowerCLI, but I am quite unsure whether it works with oVirt.
Any suggestions would be highly appreciated.
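For what it's worth, the pieces normally used for this with oVirt are the ovirt-ansible-collection modules and the Python SDK (ovirtsdk4), both driving the REST API (the old ovirt-shell CLI is deprecated). A rough sketch of creating a VM from a template with the SDK, with placeholder names and credentials:
```python
#!/usr/bin/env python3
# Sketch: create a VM from a template with the oVirt Python SDK.
# Engine URL, credentials, cluster and template names are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="secret",
    ca_file="/etc/pki/ovirt-engine/ca.pem",
)

vms_service = connection.system_service().vms_service()

# Creation returns immediately; the disks are copied in the background.
vm = vms_service.add(
    types.Vm(
        name="myvm01",
        cluster=types.Cluster(name="Default"),
        template=types.Template(name="centos9-template"),
    )
)

# The same per-VM service drives start/migrate style operations, e.g.:
vm_service = vms_service.vm_service(vm.id)
# vm_service.start()     # power the VM on once the disks are ready
# vm_service.migrate()   # live-migrate it to another host in the cluster

connection.close()
```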
Thanks!
1 month, 1 week
Latest 4.5.5 centos node install username password not working.
by antonio.riggio@mail.com
I have been trying to install 4.5.5 for testing and used the node installer ISO. I can't log in: what are the username and password? I have tried what others have suggested in the forums with no luck (admin@ovirt, root, admin). Also, can the engine still be installed from the node via Cockpit? How can I get logged in to do this? I used 4.3 and didn't have any of these problems; it was really simple to do.
Any suggestions?
grazie
2 months
Error attach snapshot via python SDK after update to ovirt 4.5
by luis.figueiredo10@gmail.com
Hi,
I have a problem using the Python SDK to make backups of virtual machine images after updating to oVirt 4.5.
Before the update, with oVirt 4.4 and CentOS 8, everything worked.
Scenario:
oVirt engine -> installed standalone, CentOS 9 Stream
Host -> CentOS 9 Stream - the ISO used was ovirt-node-ng-installer-latest-el9.iso
Storage -> iSCSI
ovirt-backup -> VM running on the host, CentOS 8 Stream, with python3-ovirt-engine-sdk4 installed
Versions:
Version 4.5.6-1.el9
To reproduce the error, do the following:
1 -> Create a snapshot via the GUI
2 -> Run this script to find the snapshot ID
```
#!/bin/python
import sys
import printf
import ovirtsdk4 as sdk
import time
import configparser
import re

cfg = configparser.ConfigParser()
cfg.readfp(open("/opt/VirtBKP/default.conf"))
url=cfg.get('ovirt-engine', 'api_url')
user=cfg.get('ovirt-engine', 'api_user')
password=cfg.get('ovirt-engine', 'api_password')
ca_file=cfg.get('ovirt-engine', 'api_ca_file')

connection = None
try:
    connection = sdk.Connection(url,user,password,ca_file)
    # printf.OK("Connection to oVIrt API success %s" % url)
except Exception as ex:
    print(ex)
    printf.ERROR("Connection to oVirt API has failed")

vm_service = connection.service("vms")
system_service = connection.system_service()
vms_service = system_service.vms_service()
vms = vm_service.list()
for vm in vms:
    vm_service = vms_service.vm_service(vm.id)
    snaps_service = vm_service.snapshots_service()
    snaps_map = {
        snap.id: snap.description
        for snap in snaps_service.list()
    }
    for snap_id, snap_description in snaps_map.items():
        snap_service = snaps_service.snapshot_service(snap_id)
        print("VM: "+vm.name+": "+snap_description+" "+snap_id)

# Close the connection to the server:
connection.close()
```
When I run the script I get the VM name plus the snapshot ID, so the connection to the API is OK.
```
[root@ovirt-backup-lab VirtBKP]# ./list_machines_with_snapshots_all
VM: ovirt-backup: Active VM d9631ff9-3a67-49af-b6ae-ed4d164c38ee
VM: ovirt-backup: clean 07478a39-a1e8-4d42-bdf9-aa3464ee85a2
VM: testes: Active VM 46f6ea97-b182-496a-89db-278ca8bcc952
VM: testes: testes123 729d64cb-c939-400c-b7f7-8e675c37a882
```
3 -> The problem occurs when I try to attach the snapshot to the VM.
I run the script below, which ran perfectly on the previous setup (oVirt 4.4 and CentOS 8):
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2017 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import subprocess
import os
import time
DATE = str((time.strftime("%Y%m%d-%H%M")))
# In order to send events we need to also send unique integer ids. These
# should usually come from an external database, but in this example we
# will just generate them from the current time in seconds since Jan 1st
# 1970.
event_id = int(time.time())
from dateutil.relativedelta import relativedelta
import uuid
import ovirtsdk4 as sdk
import ovirtsdk4.types as types
# Init the logging
import logging
logging.basicConfig(level=logging.DEBUG, filename='/var/log/virtbkp/dumpdebug.log')
# This function try to find the device by the SCSI-SERIAL
# which its the disk id.
#
def get_logical_name(diskid):
    logicalname="None"
    loop=True
    timeout=60
    i=int(1)
    while loop:
        if i <= timeout:
            logging.debug('[%s] Looking for disk with id \'%s\' (%s/%s).',str((time.strftime("%Y%m%d-%H%M"))),diskid,str(i),str(timeout))
            # import udev rules
            import pyudev
            devices = pyudev.Context().list_devices()
            for d in devices.match_property('SCSI_IDENT_SERIAL',diskid):
                if d.properties.get('DEVTYPE') == "disk":
                    logging.debug('[%s] found disk with logical name \'%s\'',str((time.strftime("%Y%m%d-%H%M"))),d.device_node)
                    logicalname = d.device_node
                    loop=False
                    continue
            if i == int(timeout/3) or i == int((timeout/3)*2):
                os.system("udevadm control --reload-rules && udevadm trigger")
                logging.debug('[%s] Reloading udev.',str((time.strftime("%Y%m%d-%H%M"))))
            i+=1
            time.sleep(1)
        else:
            logging.error('[%s] Timeout reached, something wrong because we did not find the disk!',str((time.strftime("%Y%m%d-%H%M"))))
            loop=False
    return logicalname
# cmd="for d in `echo /sys/block/[sv]d*`; do disk=\"`echo $d | cut -d '/' -f4`\"; udevadm info --query=property --name /dev/${disk} | grep '"+diskid+"' 1>/dev/null && echo ${disk}; done"
#
# logging.debug('[%s] using cmd \'[%s]\'.',str((time.strftime("%Y%m%d-%H%M"))),cmd)
# while loop:
# try:
# logging.debug('[%s] running command %s/%s: \'%s\'.',str((time.strftime("%Y%m%d-%H%M"))),str(i),str(timeout),cmd)
# path = subprocess.check_output(cmd, shell=True, universal_newlines=True).replace("\n","")
# logging.debug('[%s] path is \'[%s]\'.',str((time.strftime("%Y%m%d-%H%M"))),str(path))
# if path.startswith("vd") or path.startswith("sd") :
# logicalname = "/dev/" + path
# except:
# if i <= timeout:
# logging.debug('[%s] Looking for disk with id \'%s\'. %s/%s.',str((time.strftime("%Y%m%d-%H%M"))),diskid,str(i),str(timeout))
# time.sleep(1)
# else:
# logging.debug('[%s] something wrong because we did not find this, will dump the disks attached now!',str((time.strftime("%Y%m%d-%H%M"))))
# cmd="for disk in `echo /dev/sd*`; do echo -n \"${disk}: \"; udevadm info --query=property --name $disk|grep SCSI_SERIAL; done"
# debug = subprocess.check_output(cmd, shell=True, universal_newlines=True)
# logging.debug('%s',str(debug))
# loop=False
# i+=1
# continue
# if str(logicalname) != "None":
# logging.debug('[%s] Found disk with id \'%s\' have logical name \'%s\'.',str((time.strftime("%Y%m%d-%H%M"))),diskid,logicalname)
# loop=False
# return logicalname
# This function it's intended to be used to create the image
# from identified this in agent machine.
# We assume this will be run on the guest machine with the
# the disks attached.
def create_qemu_backup(backupdir,logicalname,diskid,diskalias,event_id):
    # Advanced options for qemu-img convert, check "man qemu-img"
    qemu_options = "-o cluster_size=2M"
    # Timeout defined for the qemu execution time
    # 3600 = 1h, 7200 = 2h, ...
    qemu_exec_timeout = 7200
    # Define output file name and path
    ofile = backupdir + "/" + diskalias + ".qcow2"
    # Exec command for making the backup
    cmd = "qemu-img convert -O qcow2 "+qemu_options+" "+logicalname+" "+ofile
    logging.debug('[%s] Will backup with command \'%s\' with defined timeout \'%s\' seconds.',str((time.strftime("%Y%m%d-%H%M"))),cmd,str(qemu_exec_timeout))
    try:
        disktimeStarted = time.time()
        logging.info('[%s] QEMU backup starting, please hang on while we finish...',str((time.strftime("%Y%m%d-%H%M"))))
        events_service.add(
            event=types.Event(
                vm=types.Vm(
                    id=data_vm.id,
                ),
                origin=APPLICATION_NAME,
                severity=types.LogSeverity.NORMAL,
                custom_id=event_id,
                description=(
                    'QEMU backup starting for disk \'%s\'.' % diskalias
                ),
            ),
        )
        event_id += 1
        run = subprocess.check_output(cmd,shell=True, timeout=qemu_exec_timeout,universal_newlines=True,stderr=subprocess.STDOUT)
        disktimeDelta = time.time() - disktimeStarted
        diskrt = relativedelta(seconds=disktimeDelta)
        diskexectime=('{:02d}:{:02d}:{:02d}'.format(int(diskrt.hours), int(diskrt.minutes), int(diskrt.seconds)))
        logging.info('[%s] Backup finished successfully for disk \'%s\' in \'%s\' .',str((time.strftime("%Y%m%d-%H%M"))),diskalias,str(diskexectime))
        events_service.add(
            event=types.Event(
                vm=types.Vm(
                    id=data_vm.id,
                ),
                origin=APPLICATION_NAME,
                severity=types.LogSeverity.NORMAL,
                custom_id=event_id,
                description=(
                    'QEMU backup finished for disk \'%s\' in \'%s\'.' % (diskalias, str(diskexectime))
                ),
            ),
        )
        event_id += 1
        return event_id
    except subprocess.TimeoutExpired as t:
        logging.error('[%s] Timeout of \'%s\' seconds expired, process \'%s\' killed.',str((time.strftime("%Y%m%d-%H%M"))),str(t.timeout),cmd)
        events_service.add(
            event=types.Event(
                vm=types.Vm(
                    id=data_vm.id,
                ),
                origin=APPLICATION_NAME,
                severity=types.LogSeverity.ERROR,
                custom_id=event_id,
                description=(
                    'Timeout of \'%s\' seconds expired, process \'%s\' killed.' % (str((time.strftime("%Y%m%d-%H%M"))),str(t.timeout))
                ),
            ),
        )
        event_id += 1
        return event_id
    except subprocess.CalledProcessError as e:
        logging.error('[%s] Execution error, command output was:',str((time.strftime("%Y%m%d-%H%M"))))
        events_service.add(
            event=types.Event(
                vm=types.Vm(
                    id=data_vm.id,
                ),
                origin=APPLICATION_NAME,
                severity=types.LogSeverity.ERROR,
                custom_id=event_id,
                description=('Execution error.'),
            ),
        )
        event_id += 1
        logging.error('[%s] %s',str((time.strftime("%Y%m%d-%H%M"))),str(e.output))
        events_service.add(
            event=types.Event(
                vm=types.Vm(
                    id=data_vm.id,
                ),
                origin=APPLICATION_NAME,
                severity=types.LogSeverity.ERROR,
                custom_id=event_id,
                description=(
                    '\'%s\'' % (str(e.output))
                ),
            ),
        )
        event_id += 1
        return event_id
# Arguments
import sys
import printf  # local helper module (same one the listing script imports); used in the connection error path below
DATA_VM_BYPASSDISKS="None"
if len(sys.argv) < 3:
    print("You must specify the right arguments!")
    exit(1)
elif len(sys.argv) < 4:
    DATA_VM_NAME = sys.argv[1]
    SNAP_ID = sys.argv[2]
else:
    exit(2)
logging.debug(
'[%s] Launched with arguments on vm \'%s\' and bypass disks \'%s\'.',
str((time.strftime("%Y%m%d-%H%M"))),
DATA_VM_NAME,
DATA_VM_BYPASSDISKS
)
# Parse de ini file with the configurations
import configparser
cfg = configparser.ConfigParser()
cfg.readfp(open("/opt/VirtBKP/default.conf"))
#BCKDIR = '/mnt/backups'
BCKDIR = cfg.get('ovirt-engine','backupdir')
# The connection details:
API_URL = cfg.get('ovirt-engine','api_url')
API_USER = cfg.get('ovirt-engine','api_user')
API_PASSWORD = cfg.get('ovirt-engine','api_password')
# The file containing the certificat of the CA used by the server. In
# an usual installation it will be in the file '/etc/pki/ovirt-engine/ca.pem'.
#API_CA_FILE = '/opt/VirtBKP/ca.crt'
API_CA_FILE = cfg.get('ovirt-engine','api_ca_file')
# The name of the application, to be used as the 'origin' of events
# sent to the audit log:
APPLICATION_NAME = 'Image Backup Service'
# The name of the virtual machine where we will attach the disks in
# order to actually back-up them. This virtual machine will usually have
# some kind of back-up software installed.
#AGENT_VM_NAME = 'ovirt-backup'
AGENT_VM_NAME = cfg.get('ovirt-engine','agent_vm_name')
## Connect to the server:
#connection = sdk.Connection(
# url=API_URL,
# username=API_USER,
# password=API_PASSWORD,
# ca_file=API_CA_FILE,
# debug=True,
# log=logging.getLogger(),
#)
#logging.info('[%s] Connected to the server.',str((time.strftime("%Y%m%d-%H%M"))))
cfg = configparser.ConfigParser()
cfg.readfp(open("/opt/VirtBKP/default.conf"))
url=cfg.get('ovirt-engine', 'api_url')
user=cfg.get('ovirt-engine', 'api_user')
password=cfg.get('ovirt-engine', 'api_password')
ca_file=cfg.get('ovirt-engine', 'api_ca_file')
connection = None
try:
    connection = sdk.Connection(url,user,password,ca_file)
    logging.info('[%s] Connected to the server.',str((time.strftime("%Y%m%d-%H%M"))))
except Exception as ex:
    print(ex)
    printf.ERROR("Connection to oVirt API has failed")
# Get the reference to the root of the services tree:
system_service = connection.system_service()
# Get the reference to the service that we will use to send events to
# the audit log:
events_service = system_service.events_service()
# Timer count for global process
totaltimeStarted = time.time()
# Get the reference to the service that manages the virtual machines:
vms_service = system_service.vms_service()
# Find the virtual machine that we want to back up. Note that we need to
# use the 'all_content' parameter to retrieve the retrieve the OVF, as
# it isn't retrieved by default:
data_vm = vms_service.list(
search='name=%s' % DATA_VM_NAME,
all_content=True,
)[0]
logging.info(
'[%s] Found data virtual machine \'%s\', the id is \'%s\'.',
str((time.strftime("%Y%m%d-%H%M"))), data_vm.name, data_vm.id,
)
# Find the virtual machine were we will attach the disks in order to do
# the backup:
agent_vm = vms_service.list(
search='name=%s' % AGENT_VM_NAME,
)[0]
logging.info(
'[%s] Found agent virtual machine \'%s\', the id is \'%s\'.',
str((time.strftime("%Y%m%d-%H%M"))), agent_vm.name, agent_vm.id,
)
# Find the services that manage the data and agent virtual machines:
data_vm_service = vms_service.vm_service(data_vm.id)
agent_vm_service = vms_service.vm_service(agent_vm.id)
# Create an unique description for the snapshot, so that it is easier
# for the administrator to identify this snapshot as a temporary one
# created just for backup purposes:
#snap_description = '%s-backup-%s' % (data_vm.name, uuid.uuid4())
snap_description = 'BACKUP_%s_%s' % (data_vm.name, DATE)
# Send an external event to indicate to the administrator that the
# backup of the virtual machine is starting. Note that the description
# of the event contains the name of the virtual machine and the name of
# the temporary snapshot, this way, if something fails, the administrator
# will know what snapshot was used and remove it manually.
#events_service.add(
# event=types.Event(
# vm=types.Vm(
# id=data_vm.id,
# ),
# origin=APPLICATION_NAME,
# severity=types.LogSeverity.NORMAL,
# custom_id=event_id,
# description=(
# 'Backup of virtual machine \'%s\' using snapshot \'%s\' is '
# 'starting.' % (data_vm.name, snap_description)
# ),
# ),
#)
#event_id += 1
# Create the structure we will use to deploy the backup data
#bckfullpath = BCKDIR + "/" + data_vm.name + "/" + str((time.strftime("%Y%m%d-%H%M")))
#mkdir = "mkdir -p " + bckfullpath
#subprocess.call(mkdir, shell=True)
#logging.debug(
# '[%s] Created directory \'%s\' as backup destination.',
# str((time.strftime("%Y%m%d-%H%M"))),
# bckfullpath
#)
# Send the request to create the snapshot. Note that this will return
# before the snapshot is completely created, so we will later need to
# wait till the snapshot is completely created.
# The snapshot will not include memory. Change to True the parameter
# persist_memorystate to get it (in that case the VM will be paused for a while).
snaps_service = data_vm_service.snapshots_service()
#snap = snaps_service.add(
# snapshot=types.Snapshot(
# description=snap_description,
# persist_memorystate=False,
# ),
#)
#logging.info(
# '[%s] Sent request to create snapshot \'%s\', the id is \'%s\'.',
# str((time.strftime("%Y%m%d-%H%M"))), snap.description, snap.id,
#)
# Poll and wait till the status of the snapshot is 'ok', which means
# that it is completely created:
snap_id = SNAP_ID
snap_service = snaps_service.snapshot_service(snap_id)
#while snap.snapshot_status != types.SnapshotStatus.OK:
# logging.info(
# '[%s] Waiting till the snapshot is created, the status is now \'%s\'.',
# str((time.strftime("%Y%m%d-%H%M"))),
# snap.snapshot_status
# )
# time.sleep(1)
# snap = snap_service.get()
#logging.info('[%s] The snapshot is now complete.',str((time.strftime("%Y%m%d-%H%M"))))
# Retrieve the descriptions of the disks of the snapshot:
snap_disks_service = snap_service.disks_service()
snap_disks = snap_disks_service.list()
# Attach all the disks of the snapshot to the agent virtual machine, and
# save the resulting disk attachments in a list so that we can later
# detach them easily:
attachments_service = agent_vm_service.disk_attachments_service()
attachments = []
for snap_disk in snap_disks:
    attachment = attachments_service.add(
        attachment=types.DiskAttachment(
            disk=types.Disk(
                id=snap_disk.id,
                snapshot=types.Snapshot(
                    id=snap_id,
                ),
            ),
            active=True,
            bootable=False,
            interface=types.DiskInterface.VIRTIO_SCSI,
        ),
    )
    attachments.append(attachment)
    logging.info(
        '[%s] Attached disk \'%s\' to the agent virtual machine \'%s\'.',
        str((time.strftime("%Y%m%d-%H%M"))), attachment.disk.id, agent_vm.name
    )
    print(f"Attach disk:{attachment.disk.id} to the agent vm:{agent_vm.name}")
# Now the disks are attached to the virtual agent virtual machine, we
# can then ask that virtual machine to perform the backup. Doing that
# requires a mechanism to talk to the backup software that runs inside the
# agent virtual machine. That is outside of the scope of the SDK. But if
# the guest agent is installed in the virtual machine then we can
# provide useful information, like the identifiers of the disks that have
# just been attached.
#for attachment in attachments:
# if attachment.logical_name is not None:
# logging.info(
# '[%s] Logical name for disk \'%s\' is \'%s\'.',
# str((time.strftime("%Y%m%d-%H%M"))), attachment.disk.id, attachment.logical_name,
# )
# else:
# logging.info(
# '[%s] The logical name for disk \'%s\' isn\'t available. Is the '
# 'guest agent installed?',
# str((time.strftime("%Y%m%d-%H%M"))),
# attachment.disk.id,
# )
# Close the connection to the server:
connection.close()
```
The result of the command is:
[root@ovirt-backup-lab VirtBKP]# ./teste.py ovirt-backup "729d64cb-c939-400c-b7f7-8e675c37a882"
Traceback (most recent call last):
  File "./teste.py", line 412, in <module>
    interface=types.DiskInterface.VIRTIO_SCSI,
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 7147, in add
    return self._internal_add(attachment, headers, query, wait)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in _internal_add
    return future.wait() if wait else future
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in callback
    self._check_fault(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Failed to hot-plug disk]". HTTP response code is 400.
In the engine log I have this:
2024-09-25 18:30:00,132+01 INFO [org.ovirt.engine.core.sso.service.AuthenticationService] (default task-27) [] User admin@internal-authz with profile [internal] successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2024-09-25 18:30:00,146+01 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-27) [3f712b2] Running command: CreateUserSessionCommand internal: false.
2024-09-25 18:30:00,149+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-27) [3f712b2] EVENT_ID: USER_VDC_LOGIN(30), User admin@internal-authz connecting from '172.24.0.13' using session 'zVRF1cTF22ex9oRy2wcK/NIjfr2FwY2+AqhcRFv02J6tDPEoAC7YB329VVcrGrPoQcxJLRIokEDvt8j/PySxcg==' logged in.
2024-09-25 18:30:00,404+01 INFO [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Lock Acquired to object 'EngineLock:{exclusiveLocks='[6dc760fb-13bd-4285-b474-2ff5e39af74e=DISK]', sharedLocks=''}'
2024-09-25 18:30:00,415+01 INFO [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Running command: AttachDiskToVmCommand internal: false. Entities affected : ID: e302e3ac-6101-4317-b19d-46d951237122 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER, ID: 6dc760fb-13bd-4285-b474-2ff5e39af74e Type: DiskAction group ATTACH_DISK with role type USER
2024-09-25 18:30:00,421+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] START, HotPlugDiskVDSCommand(HostName = hv1, HotPlugDiskVDSParameters:{hostId='35bd2f95-464f-4080-98e9-729a05f1a39b', vmId='e302e3ac-6101-4317-b19d-46d951237122', diskId='6dc760fb-13bd-4285-b474-2ff5e39af74e'}), log id: 7c9d787c
2024-09-25 18:30:00,422+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Disk hot-plug: <?xml version="1.0" encoding="UTF-8"?><hotplug>
<devices>
<disk snapshot="no" type="file" device="disk">
<target dev="sda" bus="scsi"/>
<source file="/rhev/data-center/mnt/blockSD/0b52e59e-fe08-4e18-8273-228955bba3b7/images/6dc760fb-13bd-4285-b474-2ff5e39af74e/46bcd0f2-33ff-40a3-a32b-f39b34e99f77">
<seclabel model="dac" type="none" relabel="no"/>
</source>
<driver name="qemu" io="threads" type="qcow2" error_policy="stop" cache="writethrough"/>
<alias name="ua-6dc760fb-13bd-4285-b474-2ff5e39af74e"/>
<address bus="0" controller="0" unit="1" type="drive" target="0"/>
<serial>6dc760fb-13bd-4285-b474-2ff5e39af74e</serial>
</disk>
</devices>
<metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:vm>
<ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:poolID>042813a0-7a69-11ef-a340-ac1f6b165d0d</ovirt-vm:poolID>
<ovirt-vm:volumeID>46bcd0f2-33ff-40a3-a32b-f39b34e99f77</ovirt-vm:volumeID>
<ovirt-vm:shared>transient</ovirt-vm:shared>
<ovirt-vm:imageID>6dc760fb-13bd-4285-b474-2ff5e39af74e</ovirt-vm:imageID>
<ovirt-vm:domainID>0b52e59e-fe08-4e18-8273-228955bba3b7</ovirt-vm:domainID>
</ovirt-vm:device>
</ovirt-vm:vm>
</metadata>
</hotplug>
2024-09-25 18:30:11,069+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Failed in 'HotPlugDiskVDS' method
2024-09-25 18:30:11,075+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM hv1 command HotPlugDiskVDS failed: internal error: unable to execute QEMU command 'blockdev-add': Could not open '/rhev/data-center/mnt/blockSD/0b52e59e-fe08-4e18-8273-228955bba3b7/images/6dc760fb-13bd-4285-b474-2ff5e39af74e/46bcd0f2-33ff-40a3-a32b-f39b34e99f77': No such file or directory
2024-09-25 18:30:11,075+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return value 'StatusOnlyReturn [status=Status [code=45, message=internal error: unable to execute QEMU command 'blockdev-add': Could not open '/rhev/data-center/mnt/blockSD/0b52e59e-fe08-4e18-8273-228955bba3b7/images/6dc760fb-13bd-4285-b474-2ff5e39af74e/46bcd0f2-33ff-40a3-a32b-f39b34e99f77': No such file or directory]]'
2024-09-25 18:30:11,075+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] HostName = hv1
2024-09-25 18:30:11,075+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Command 'HotPlugDiskVDSCommand(HostName = hv1, HotPlugDiskVDSParameters:{hostId='35bd2f95-464f-4080-98e9-729a05f1a39b', vmId='e302e3ac-6101-4317-b19d-46d951237122', diskId='6dc760fb-13bd-4285-b474-2ff5e39af74e'})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command 'blockdev-add': Could not open '/rhev/data-center/mnt/blockSD/0b52e59e-fe08-4e18-8273-228955bba3b7/images/6dc760fb-13bd-4285-b474-2ff5e39af74e/46bcd0f2-33ff-40a3-a32b-f39b34e99f77': No such file or directory, code = 45
2024-09-25 18:30:11,075+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] FINISH, HotPlugDiskVDSCommand, return: , log id: 7c9d787c
2024-09-25 18:30:11,075+01 ERROR [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Command 'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command 'blockdev-add': Could not open '/rhev/data-center/mnt/blockSD/0b52e59e-fe08-4e18-8273-228955bba3b7/images/6dc760fb-13bd-4285-b474-2ff5e39af74e/46bcd0f2-33ff-40a3-a32b-f39b34e99f77': No such file or directory, code = 45 (Failed with error FailedToPlugDisk and code 45)
2024-09-25 18:30:11,076+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Command [id=4181b4c0-f988-4e96-8dbc-6917328bfdc5]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.storage.DiskVmElement; snapshot: VmDeviceId:{deviceId='6dc760fb-13bd-4285-b474-2ff5e39af74e', vmId='e302e3ac-6101-4317-b19d-46d951237122'}.
2024-09-25 18:30:11,076+01 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Command [id=4181b4c0-f988-4e96-8dbc-6917328bfdc5]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmDevice; snapshot: VmDeviceId:{deviceId='6dc760fb-13bd-4285-b474-2ff5e39af74e', vmId='e302e3ac-6101-4317-b19d-46d951237122'}.
2024-09-25 18:30:11,083+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] EVENT_ID: USER_FAILED_ATTACH_DISK_TO_VM(2,017), Failed to attach Disk testes_Disk1 to VM ovirt-backup (User: admin@internal-authz).
2024-09-25 18:30:11,083+01 INFO [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-27) [8ee9996a-2ef8-42a4-a72b-18d406cc9199] Lock freed to object 'EngineLock:{exclusiveLocks='[6dc760fb-13bd-4285-b474-2ff5e39af74e=DISK]', sharedLocks=''}'
2024-09-25 18:30:11,083+01 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-27) [] Operation Failed: [Failed to hot-plug disk]
This is the scenario in my lab test. In production I have 4 hosts, 3 on CentOS 8 and 1 on the CentOS 9 oVirt Node image. If my ovirt-backup machine is running on the host with CentOS 9 oVirt Node I get the same error as mentioned above, but if I migrate ovirt-backup to another host with CentOS 8 I do not have this problem.
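As an aside, on 4.5 the snapshot-attach approach this script uses can be replaced by the VM backup API that ovirtsdk4 exposes, which avoids hot-plugging snapshot disks into an agent VM entirely; a minimal sketch with placeholder IDs and credentials (not taken from this log):
```python
#!/usr/bin/env python3
# Sketch: start a backup of a VM's disks through the SDK backup service instead
# of attaching snapshot disks to an agent VM. IDs and credentials are placeholders.
import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="secret",
    ca_file="/etc/pki/ovirt-engine/ca.pem",
)

vm_service = connection.system_service().vms_service().vm_service("VM-UUID")
backups_service = vm_service.backups_service()

# Ask the engine to start a backup covering the given disks; the images can then
# be downloaded with ovirt-imageio while the backup is in the READY phase.
backup = backups_service.add(
    types.Backup(disks=[types.Disk(id="DISK-UUID")])
)
backup_service = backups_service.backup_service(backup.id)

while True:
    phase = backup_service.get().phase
    if phase == types.BackupPhase.READY:
        break
    if phase == types.BackupPhase.FAILED:
        raise RuntimeError("backup failed")
    time.sleep(1)

# ... download the disks here (e.g. with the imageio client), then finalize:
backup_service.finalize()
connection.close()
```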
Any idea what could cause this problem?
Sorry if my English isn't understandable
Thanks
Luís Figueiredo
2 months, 1 week
Migrate Ovirt Node from el8 to el9
by devis@gmx.com
Hello,
I tried to find a guide or procedure to upgrade an oVirt node from el8 to el9, but I didn't find anything.
Is it possible to migrate a node without reinstallation?
Thanks,
Devis
2 months, 2 weeks
Installing on iSCSI boot node
by rtartar@gmail.com
I've installed oVirt Node on an iSCSI boot server (Cisco UCS chassis), but when I go to add it to the manager it fails and the node loses contact with its iBFT interface. Is there any way I can leave that interface alone when adding the host to the manager? I have 5 physical interfaces attached to the node. Any help would be greatly appreciated. I am using the node 4.5 installer. I added ip=ibft to the installation startup and it seems to work mostly...
Thanks in advance
2 months, 3 weeks