How to stop backup operation on vm
by marco.pegoraro@univr.it
Hi,
I have the following problem.
In the engine database I have the following backups:
engine=# select * from vm_backups;
-[ RECORD 1 ]------+------------------------------------------------------------------------------------------------------------------------
backup_id          | 864beee4-c87d-447c-a249-83a38fd1d4e7
from_checkpoint_id |
to_checkpoint_id   | 0ecf0a56-e7bb-4133-9ac7-441aa0d84545
vm_id              | e1465242-3724-405b-9750-250d1eeb9742
phase              | Ready
_create_date       | 2024-12-18 16:42:54.717+01
host_id            |
description        | {"type":"veeam","job_uuid":"0c3a6b0a-b46e-4ca2-8ed1-3d29c4b221ab","session_id":"47066400-4f1f-456e-ab94-6c3af7043809"}
_update_date       | 2024-12-18 16:43:06.736+01
backup_type        | hybrid
snapshot_id        | 234de52d-8739-499a-8031-b8d6f6db0422
is_stopped         | f
-[ RECORD 2 ]------+------------------------------------------------------------------------------------------------------------------------
backup_id          | ace9cdf3-ad51-45b9-af6f-1728b3941187
from_checkpoint_id |
to_checkpoint_id   | e5b1ed24-4781-43dc-a1fd-b71173fd1602
vm_id              | f2e2a274-71f7-4e4a-87ae-942ce0b78dd2
phase              | Ready
_create_date       | 2024-12-19 11:35:16.67+01
host_id            |
description        | {"type":"veeam","job_uuid":"0c3a6b0a-b46e-4ca2-8ed1-3d29c4b221ab","session_id":"8d89018d-ad81-4857-a4e3-42cf0b38a41d"}
_update_date       | 2024-12-19 11:35:40.907+01
backup_type        | hybrid
snapshot_id        | c9108e74-8bdc-401a-b27d-adab99f28682
is_stopped         | f
(2 rows)
I want to stop the backups and delete them.
I ran the following commands on the KVM nodes to stop the backups:
#vdsm-client VM stop_backup vmID=e1465242-3724-405b-9750-250d1eeb9742 backup_id=864beee4-c87d-447c-a249-83a38fd1d4e7
{
"code": 0,
"message": "Done"
}
#vdsm-client VM stop_backup vmID=e1465242-3724-405b-9750-250d1eeb9742 backup_id=864beee4-c87d-447c-a249-83a38fd1d4e7
{
"code": 0,
"message": "Done"
}
but the database still shows:
engine=# select * from vm_backups;
-[ RECORD 1 ]------+------------------------------------------------------------------------------------------------------------------------
backup_id          | 864beee4-c87d-447c-a249-83a38fd1d4e7
from_checkpoint_id |
to_checkpoint_id   | 0ecf0a56-e7bb-4133-9ac7-441aa0d84545
vm_id              | e1465242-3724-405b-9750-250d1eeb9742
phase              | Ready
_create_date       | 2024-12-18 16:42:54.717+01
host_id            |
description        | {"type":"veeam","job_uuid":"0c3a6b0a-b46e-4ca2-8ed1-3d29c4b221ab","session_id":"47066400-4f1f-456e-ab94-6c3af7043809"}
_update_date       | 2024-12-18 16:43:06.736+01
backup_type        | hybrid
snapshot_id        | 234de52d-8739-499a-8031-b8d6f6db0422
is_stopped         | f
-[ RECORD 2 ]------+------------------------------------------------------------------------------------------------------------------------
backup_id          | ace9cdf3-ad51-45b9-af6f-1728b3941187
from_checkpoint_id |
to_checkpoint_id   | e5b1ed24-4781-43dc-a1fd-b71173fd1602
vm_id              | f2e2a274-71f7-4e4a-87ae-942ce0b78dd2
phase              | Ready
_create_date       | 2024-12-19 11:35:16.67+01
host_id            |
description        | {"type":"veeam","job_uuid":"0c3a6b0a-b46e-4ca2-8ed1-3d29c4b221ab","session_id":"8d89018d-ad81-4857-a4e3-42cf0b38a41d"}
_update_date       | 2024-12-19 11:35:40.907+01
backup_type        | hybrid
snapshot_id        | c9108e74-8bdc-401a-b27d-adab99f28682
is_stopped         | f
(2 rows)
Do you have any idea how I can accomplish this?
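One thing worth trying before touching the database directly: stop the backup through the engine's own REST API rather than vdsm-client on the host, so the engine updates vm_backups itself. The oVirt 4.4+ REST API exposes a finalize action on a backup. A minimal sketch, using the IDs of the first backup above; the engine FQDN and password are placeholders, and the block only assembles and prints the curl command so it can be reviewed before running:

```shell
# Sketch only: assemble the REST 'finalize' call for the first stuck
# backup. The engine URL and password below are placeholders.
ENGINE="https://ovirt-engine.example.com/ovirt-engine/api"
VM_ID="e1465242-3724-405b-9750-250d1eeb9742"
BACKUP_ID="864beee4-c87d-447c-a249-83a38fd1d4e7"
FINALIZE_URL="$ENGINE/vms/$VM_ID/backups/$BACKUP_ID/finalize"

# Print the command for review instead of executing it here.
echo "curl -k -u admin@internal:PASSWORD -X POST -H 'Content-Type: application/xml' -d '<action/>' $FINALIZE_URL"
```

If the finalize call succeeds, the engine should move the backup out of the Ready phase and eventually remove it, without manual SQL.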
Thank you very much.
3 months, 3 weeks
"Master" Storage Domain
by duparchy@esrf.fr
Hi
What is the role of the "Master Storage Domain"? And what are the consequences if it is unresponsive for a while?
3 months, 3 weeks
Migrate from Centos8 to Centos9
by suporte@logicworks.pt
Hi,
I need to migrate from CentOS 8 to CentOS 9: engine + 2 nodes, iSCSI storage.
My idea is to back up the engine on CentOS 8:
engine-backup --scope=all --mode=backup --file=/root/backup
Then install a new engine machine with CentOS 9 and restore:
engine-backup --mode=restore --file=backup --log=backup_restore.log
engine-setup
Now the engine on CentOS 9 is controlling the 2 nodes on CentOS 8.
Then I migrate all VMs from node2 to node1 and reinstall node2 with CentOS 9.
Migrate all VMs from node1 to node2.
Reinstall node1 with CentOS 9.
Do you think it will work?
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
3 months, 4 weeks
Re: [External] : Adding ISO doesn't work
by Stanley Ang
Hi Marcos,
Is the problem already fixed in 4.5.5-1.28?
Is the replacement of olvm45-security-fixes supposed to be a workaround?
I have already changed olvm45-security-fixes.conf to the one you attached
in the mail, but the problem still persists.
Stan
3 months, 4 weeks
CentOS 9 Stream, oVirt 4.5
by eevans@digitaldatatechs.com
My setup:
1 separate physical oVirt engine with 3 CentOS 9 nodes.
I have imported OVAs from oVirt 4.3 and they will not launch. They stay in "waiting for launch" for about 15 seconds, then shut down.
Also, the node networks will not sync. I turned off the firewalls to make sure nothing was being blocked, with the same result.
I also disabled SELinux, with the same result.
Any help would be appreciated.
4 months
Uploading .iso image with REST API creates <id>.meta with wrong DISKTYPE value
by markul11@protonmail.com
When creating a file (.iso image) with the web GUI, the .meta file has DISKTYPE=ISOF, but when using REST, DISKTYPE=DATA and the file is not listed in the "Attach CD" drop-down when creating a new VM.
Using the API is the only way for me, because I can't find a fix for a connection problem from Firefox/Fedora 39 to the web GUI, so all transfers end up as "suspended by the system".
I am using the following POST to create the file:
curl -k -u "admin@internal:secretpw" -X POST \
  -H "Content-Type: application/xml" \
  -d '<disk><name>Rocky-9-latest-x86_64-boot.iso</name><alias>Rocky-9-latest-x86_64-boot.iso</alias><description>Rocky-9-latest-x86_64-boot.iso</description><provisioned_size>1068498944</provisioned_size><format>raw</format><storage_domains><storage_domain><name>isos</name></storage_domain></storage_domains></disk>' \
  https://ovirtsvc.adm.cnt.uksw.edu.pl/ovirt-engine/api/disks
The resulting <id>.meta is:
CAP=1068498944
CTIME=1732282300
DESCRIPTION={"DiskAlias":"Rocky-9-latest-x86_64-boot.iso","DiskDescription":"Rocky-9-latest-x86_64-boot.iso"}
DISKTYPE=DATA
[...]
FORMAT=RAW
[...]
TYPE=SPARSE
VOLTYPE=LEAF
EOF
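A likely fix, though an assumption on my part: the oVirt REST API's disk type has a content_type attribute, which the GUI sets to "iso" for ISO uploads while the REST default is "data", and that is what ends up as DISKTYPE in the .meta file. Adding <content_type>iso</content_type> to the POSTed <disk> element should therefore produce DISKTYPE=ISOF. A sketch that builds the corrected body (the actual curl line is left commented):

```shell
# Hypothetical corrected request body: same as the original POST,
# plus <content_type>iso</content_type> so the engine treats the
# disk as an ISO rather than a data disk.
BODY='<disk>
  <name>Rocky-9-latest-x86_64-boot.iso</name>
  <content_type>iso</content_type>
  <provisioned_size>1068498944</provisioned_size>
  <format>raw</format>
  <storage_domains><storage_domain><name>isos</name></storage_domain></storage_domains>
</disk>'
echo "$BODY"
# Then POST it as before:
#   curl -k -u "admin@internal:secretpw" -X POST \
#        -H "Content-Type: application/xml" -d "$BODY" \
#        https://ovirtsvc.adm.cnt.uksw.edu.pl/ovirt-engine/api/disks
```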
4 months
oVirt New version?
by ecsi@ecsi.hu
There hasn't been a new version for over a year, and not long ago, you asked for help with the release of a new version. What is the current status of the project?
4 months
Problems when add storagedomain "master"
by brancomrt@gmail.com
Hi,
I have a problem when I add an iSCSI storage domain with an Ansible playbook. Here is a sample of the task in my playbook:
- name: Add Storage master
  ovirt.ovirt.ovirt_storage_domain:
    auth: "{{ ovirt_auth }}"
    name: data_iscsi
    host: xxx.xxxx.xxx
    data_center: "{{ data_center_name }}"
    domain_function: data
    iscsi:
      target: iqn.2007-11.com.nimblestorage:olvm-infra-dc-v66459751dbd365d4.00000026.bd96f285
      lun_id: 2c32e833f545870cf6c9ce90085f296bd
      address: 192.168.3.252
      port: 3260
      username: ''
      password: ''
    timeout: 240
    discard_after_delete: True
    backup: False
    critical_space_action_blocker: 5
    warning_low_space: 10
This is the "ERROR" when executing the palybook:
TASK [olvm_addstorage : Add Storage master] ******************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Network error during communication with the Host.]". HTTP response code is 400.
fatal: [infradcatm-r6e1l10.atech.com.br]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Network error during communication with the Host.]\". HTTP response code is 400."}
Logs of engine:
# tail -f /var/log/ovirt-engine/engine.log
2024-12-09 05:54:21,154Z ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connection timeout for host 'xxxxx.xxxxxx.xxx.xxx', last response arrived 1501 ms ago.
2024-12-09 05:54:22,218Z INFO [org.ovirt.engine.core.sso.service.AuthenticationService] (default task-22) [] User admin@internal-authz with profile [internal] successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2024-12-09 05:54:22,738Z INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-22) [46e35dea] Running command: CreateUserSessionCommand internal: false.
2024-12-09 05:54:22,794Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-22) [46e35dea] EVENT_ID: USER_VDC_LOGIN(30), User admin@internal-authz connecting from '192.168.29.20' using session 'n1uuzlRLnIXfFAua0AOD+2sk0g+RKku9vB6baJ4UoYdoxZ5JuNtySPblCXQkHG9Yz9RPBZiOMjowN+gy6gJyqg==' logged in.
2024-12-09 05:54:22,913Z INFO [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Running command: ConnectStorageToVdsCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2024-12-09 05:54:22,916Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] START, ConnectStorageServerVDSCommand(HostName = xxxxx.xxxxxx.xxx.xxx, StorageServerConnectionManagementVDSParameters:{hostId='260fd31d-b8a3-4ac7-a919-da2964b05b2b', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='ISCSI', connectionList='[StorageServerConnections:{id='null', connection='192.168.3.252', iqn='iqn.2007-11.com.nimblestorage:olvm-infra-dc-v66459751dbd365d4.00000026.bd96f285', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 5842a368
2024-12-09 05:54:22,918Z INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to xxxxx.xxxxxx.xxx.xxx/192.168.29.20
2024-12-09 05:54:22,918Z INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connected to xxxxx.xxxxxx.xxx.xxx/192.168.29.20:54321
2024-12-09 05:54:22,920Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Command 'ConnectStorageServerVDSCommand(HostName = xxxxx.xxxxxx.xxx.xxx, StorageServerConnectionManagementVDSParameters:{hostId='260fd31d-b8a3-4ac7-a919-da2964b05b2b', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='ISCSI', connectionList='[StorageServerConnections:{id='null', connection='192.168.3.252', iqn='iqn.2007-11.com.nimblestorage:olvm-infra-dc-v66459751dbd365d4.00000026.bd96f285', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'})' execution failed: java.net.ConnectException: Connection refused
2024-12-09 05:54:22,920Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] FINISH, ConnectStorageServerVDSCommand, return: , log id: 5842a368
2024-12-09 05:54:22,920Z ERROR [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Command 'org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.net.ConnectException: Connection refused (Failed with error VDS_NETWORK_ERROR and code 5022)
2024-12-09 05:54:22,920Z INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engine-Thread-51) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Clearing domains data for host xxxxx.xxxxxx.xxx.xxx
2024-12-09 05:54:22,920Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engine-Thread-51) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Host 'xxxxx.xxxxxx.xxx.xxx' is not responding.
2024-12-09 05:54:22,924Z ERROR [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Transaction rolled-back for command 'org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand'.
2024-12-09 05:54:22,962Z ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-22) [] Operation Failed: [Network error during communication with the Host.]
2024-12-09 05:54:24,676Z INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to xxxxx.xxxxxx.xxx.xxx/192.168.29.20
2024-12-09 05:54:24,677Z INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connected to xxxxx.xxxxxx.xxx.xxx/192.168.29.20:54321
2024-12-09 05:54:24,775Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-97) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM xxxxx.xxxxxx.xxx.xxx command Get Host Capabilities failed: Recovering from crash or Initializing
2024-12-09 05:54:24,776Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-97) [] Unable to RefreshCapabilities: VDSRecoveringException: Recovering from crash or Initializing
2024-12-09 05:54:30,275Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [] START, GetHardwareInfoAsyncVDSCommand(HostName = xxxxx.xxxxxx.xxx.xxx, VdsIdAndVdsVDSCommandParametersBase:{hostId='260fd31d-b8a3-4ac7-a919-da2964b05b2b', vds='Host[xxxxx.xxxxxx.xxx.xxx,260fd31d-b8a3-4ac7-a919-da2964b05b2b]'}), log id: 47684f5d
2024-12-09 05:54:30,275Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [] FINISH, GetHardwareInfoAsyncVDSCommand, return: , log id: 47684f5d
2024-12-09 05:54:30,275Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [] Host 'xxxxx.xxxxxx.xxx.xxx' is running with SELinux in 'DISABLED' mode
2024-12-09 05:54:30,321Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [] EVENT_ID: VDS_NETWORKS_OUT_OF_SYNC(1,110), Host xxxxx.xxxxxx.xxx.xxx's following network(s) are not synchronized with their Logical Network configuration: ovirtmgmt.
2024-12-09 05:54:30,364Z INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6133e03b] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 260fd31d-b8a3-4ac7-a919-da2964b05b2b Type: VDS
2024-12-09 05:54:30,366Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6f8cbaaa] Lock Acquired to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-12-09 05:54:30,373Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6f8cbaaa] Running command: UpdateClusterCommand internal: true. Entities affected : ID: 1ff65d4a-458d-49a9-a66d-9d1bfdfc5df7 Type: ClusterAction group EDIT_CLUSTER_CONFIGURATION with role type ADMIN
2024-12-09 05:54:30,435Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6f8cbaaa] EVENT_ID: FENCE_DISABLED_IN_CLUSTER_POLICY(9,016), Fencing is disabled in Fencing Policy of the Cluster infradc, so HA VMs running on a non-responsive host will not be restarted elsewhere.
2024-12-09 05:54:30,465Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6f8cbaaa] EVENT_ID: SYSTEM_UPDATE_CLUSTER(835), Host cluster infradc was updated by system
2024-12-09 05:54:30,465Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6f8cbaaa] Lock freed to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-12-09 05:54:30,621Z INFO [org.ovirt.engine.core.bll.InitVdsOnUpCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [6ff01f24] Running command: InitVdsOnUpCommand internal: true. Entities affected : ID: d8c291be-6953-4185-8a51-080b5bf013a2 Type: StoragePool
2024-12-09 05:54:30,703Z ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [6ff01f24] Can not run fence action on host 'xxxxx.xxxxxx.xxx.xxx', no suitable proxy host was found.
2024-12-09 05:54:30,705Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetMOMPolicyParametersVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [6ff01f24] START, SetMOMPolicyParametersVDSCommand(HostName = xxxxx.xxxxxx.xxx.xxx, MomPolicyVDSParameters:{hostId='260fd31d-b8a3-4ac7-a919-da2964b05b2b'}), log id: 2fc3f5c3
2024-12-09 05:54:30,730Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetMOMPolicyParametersVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [6ff01f24] FINISH, SetMOMPolicyParametersVDSCommand, return: , log id: 2fc3f5c3
2024-12-09 05:54:30,820Z INFO [org.ovirt.engine.core.bll.hostdev.RefreshHostDevicesCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [272ff804] Running command: RefreshHostDevicesCommand internal: true. Entities affected : ID: 260fd31d-b8a3-4ac7-a919-da2964b05b2b Type: VDSAction group MANIPULATE_HOST with role type ADMIN
2024-12-09 05:54:32,122Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [272ff804] EVENT_ID: VDS_DETECTED(13), Status of host xxxxx.xxxxxx.xxx.xxx was set to Up.
2024-12-09 05:54:32,224Z INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [1887e967] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 260fd31d-b8a3-4ac7-a919-da2964b05b2b Type: VDS
2024-12-09 05:54:32,227Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [3932330e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-12-09 05:54:32,233Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [3932330e] Running command: UpdateClusterCommand internal: true. Entities affected : ID: 1ff65d4a-458d-49a9-a66d-9d1bfdfc5df7 Type: ClusterAction group EDIT_CLUSTER_CONFIGURATION with role type ADMIN
2024-12-09 05:54:32,282Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [3932330e] EVENT_ID: FENCE_DISABLED_IN_CLUSTER_POLICY(9,016), Fencing is disabled in Fencing Policy of the Cluster infradc, so HA VMs running on a non-responsive host will not be restarted elsewhere.
2024-12-09 05:54:32,283Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [3932330e] Lock freed to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-12-09 05:54:32,286Z INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [5bd74a4b] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: 260fd31d-b8a3-4ac7-a919-da2964b05b2b Type: VDS
2024-12-09 05:54:32,288Z INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [5bd74a4b] Received first domain report for host xxxxx.xxxxxx.xxx.xxx
2024-12-09 05:55:39,126Z INFO [org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-22) [] User admin@internal-authz with profile [internal] successfully logged out
2024-12-09 05:55:39,131Z INFO [org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default task-26) [736ce9c] Running command: TerminateSessionsForTokenCommand internal: true.
2024-12-09 05:59:41,230Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'commandCoordinator' is using 0 threads out of 10, 2 threads waiting for tasks.
2024-12-09 05:59:41,230Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
2024-12-09 05:59:41,230Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'engineScheduledThreadPool' is using 0 threads out of 1, 100 threads waiting for tasks.
2024-12-09 05:59:41,230Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'engineThreadMonitoringThreadPool' is using 1 threads out of 1, 0 threads waiting for tasks.
2024-12-09 05:59:41,230Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5, 1 threads waiting for tasks.
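The decisive lines above are the "Connection refused" errors when the engine contacts VDSM on the host (port 54321) while VDSM reports "Recovering from crash or Initializing": vdsmd was apparently restarting at the moment the playbook ran, and it reconnects a couple of seconds later. A self-contained triage sketch; the heredoc is sample data standing in for the real /var/log/ovirt-engine/engine.log:

```shell
# Filter the ERROR lines out of an engine.log excerpt. The heredoc
# below is sample data; point grep at the real log on the engine.
cat > /tmp/engine-excerpt.log <<'EOF'
2024-12-09 05:54:22,920Z ERROR [ConnectStorageServerVDSCommand] execution failed: java.net.ConnectException: Connection refused
2024-12-09 05:54:24,677Z INFO [ReactorClient] Connected to host:54321
EOF
ERRORS=$(grep ' ERROR ' /tmp/engine-excerpt.log)
echo "$ERRORS"
# On the host itself, check that VDSM is up and listening:
#   systemctl status vdsmd
#   ss -tlnp | grep 54321
```

If vdsmd is still starting up when the task fires, retrying the task (or waiting for the host to report Up first) may be enough.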
Versions:
ansible-core: 2.16.3-2.el8
Installed packages:
ovirt-ansible-collection.noarch 3.2.0-1.17.el8 @local_ovirt-4.5
ovirt-host.x86_64 4.5.0-3.el8 @local_ovirt-4.5
ovirt-host-dependencies.x86_64 4.5.0-3.el8 @local_ovirt-4.5
ovirt-hosted-engine-ha.noarch 2.5.1-1.el8 @local_ovirt-4.5
ovirt-hosted-engine-setup.noarch 2.7.1-1.3.el8 @local_ovirt-4.5
ovirt-imageio-client.x86_64 2.5.0-1.el8 @local_ovirt-4.5
ovirt-imageio-common.x86_64 2.5.0-1.el8 @local_ovirt-4.5
ovirt-imageio-daemon.x86_64 2.5.0-1.el8 @local_ovirt-4.5
ovirt-openvswitch.noarch 2.15-4.el8 @local_ovirt-4.5
ovirt-openvswitch-ipsec.noarch 2.15-4.el8 @local_ovirt-4.5
ovirt-openvswitch-ovn.noarch 2.15-4.el8 @local_ovirt-4.5
ovirt-openvswitch-ovn-common.noarch 2.15-4.el8 @local_ovirt-4.5
ovirt-openvswitch-ovn-host.noarch 2.15-4.el8 @local_ovirt-4.5
ovirt-provider-ovn-driver.noarch 1.2.36-1.el8 @local_ovirt-4.5
ovirt-python-openvswitch.noarch 2.15-4.el8 @local_ovirt-4.5
ovirt-vmconsole.noarch 1.0.9-4.el8 @local_ovirt-4.5
ovirt-vmconsole-host.noarch 1.0.9-4.el8 @local_ovirt-4.5
Regards,
Alex Martins
4 months, 1 week