Problems when adding the "master" storage domain

Hi, I have a problem when I add an iSCSI storage domain with an Ansible playbook. This is the relevant task from my playbook:

- name: Add Storage master
  ovirt.ovirt.ovirt_storage_domain:
    auth: "{{ ovirt_auth }}"
    name: data_iscsi
    host: xxx.xxxx.xxx
    data_center: "{{ data_center_name }}"
    domain_function: data
    iscsi:
      target: iqn.2007-11.com.nimblestorage:olvm-infra-dc-v66459751dbd365d4.00000026.bd96f285
      lun_id: 2c32e833f545870cf6c9ce90085f296bd
      address: 192.168.3.252
      port: 3260
      username: ''
      password: ''
    timeout: 240
    discard_after_delete: True
    backup: False
    critical_space_action_blocker: 5
    warning_low_space: 10

This is the error when executing the playbook:

TASK [olvm_addstorage : Add Storage master] ******************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Network error during communication with the Host.]". HTTP response code is 400.
fatal: [infradcatm-r6e1l10.atech.com.br]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Network error during communication with the Host.]\". HTTP response code is 400."}
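Since the fault detail points at engine-to-host communication rather than at the storage array itself, I want to confirm that vdsmd on the hypervisor is actually up and listening before the storage task runs. This is just a sketch of the pre-check I have in mind (the "hypervisors" inventory group is a placeholder, not from my real playbook):

- name: Pre-check that VDSM is running on the hypervisor
  hosts: hypervisors          # placeholder inventory group for the KVM host
  become: true
  gather_facts: false
  tasks:
    - name: Query vdsmd service state
      ansible.builtin.command: systemctl is-active vdsmd
      register: vdsmd_state
      changed_when: false
      failed_when: vdsmd_state.stdout != "active"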
Logs of the engine:

# tail -f /var/log/ovirt-engine/engine.log
2024-12-09 05:54:21,154Z ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connection timeout for host 'xxxxx.xxxxxx.xxx.xxx', last response arrived 1501 ms ago.
2024-12-09 05:54:22,218Z INFO [org.ovirt.engine.core.sso.service.AuthenticationService] (default task-22) [] User admin@internal-authz with profile [internal] successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2024-12-09 05:54:22,738Z INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-22) [46e35dea] Running command: CreateUserSessionCommand internal: false.
2024-12-09 05:54:22,794Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-22) [46e35dea] EVENT_ID: USER_VDC_LOGIN(30), User admin@internal-authz connecting from '192.168.29.20' using session 'n1uuzlRLnIXfFAua0AOD+2sk0g+RKku9vB6baJ4UoYdoxZ5JuNtySPblCXQkHG9Yz9RPBZiOMjowN+gy6gJyqg==' logged in.
2024-12-09 05:54:22,913Z INFO [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Running command: ConnectStorageToVdsCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2024-12-09 05:54:22,916Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] START, ConnectStorageServerVDSCommand(HostName = xxxxx.xxxxxx.xxx.xxx, StorageServerConnectionManagementVDSParameters:{hostId='260fd31d-b8a3-4ac7-a919-da2964b05b2b', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='ISCSI', connectionList='[StorageServerConnections:{id='null', connection='192.168.3.252', iqn='iqn.2007-11.com.nimblestorage:olvm-infra-dc-v66459751dbd365d4.00000026.bd96f285', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 5842a368
2024-12-09 05:54:22,918Z INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to xxxxx.xxxxxx.xxx.xxx/192.168.29.20
2024-12-09 05:54:22,918Z INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connected to xxxxx.xxxxxx.xxx.xxx/192.168.29.20:54321
2024-12-09 05:54:22,920Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Command 'ConnectStorageServerVDSCommand(HostName = xxxxx.xxxxxx.xxx.xxx, StorageServerConnectionManagementVDSParameters:{hostId='260fd31d-b8a3-4ac7-a919-da2964b05b2b', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='ISCSI', connectionList='[StorageServerConnections:{id='null', connection='192.168.3.252', iqn='iqn.2007-11.com.nimblestorage:olvm-infra-dc-v66459751dbd365d4.00000026.bd96f285', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'})' execution failed: java.net.ConnectException: Connection refused
2024-12-09 05:54:22,920Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] FINISH, ConnectStorageServerVDSCommand, return: , log id: 5842a368
2024-12-09 05:54:22,920Z ERROR [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Command 'org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.net.ConnectException: Connection refused (Failed with error VDS_NETWORK_ERROR and code 5022)
2024-12-09 05:54:22,920Z INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engine-Thread-51) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Clearing domains data for host xxxxx.xxxxxx.xxx.xxx
2024-12-09 05:54:22,920Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engine-Thread-51) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Host 'xxxxx.xxxxxx.xxx.xxx' is not responding.
2024-12-09 05:54:22,924Z ERROR [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (default task-22) [c8f05ab3-d362-4e8c-bfac-0ed5d6abee98] Transaction rolled-back for command 'org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand'.
2024-12-09 05:54:22,962Z ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-22) [] Operation Failed: [Network error during communication with the Host.]
2024-12-09 05:54:24,676Z INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to xxxxx.xxxxxx.xxx.xxx/192.168.29.20
2024-12-09 05:54:24,677Z INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connected to xxxxx.xxxxxx.xxx.xxx/192.168.29.20:54321
2024-12-09 05:54:24,775Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-97) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM xxxxx.xxxxxx.xxx.xxx command Get Host Capabilities failed: Recovering from crash or Initializing
2024-12-09 05:54:24,776Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-97) [] Unable to RefreshCapabilities: VDSRecoveringException: Recovering from crash or Initializing
2024-12-09 05:54:30,275Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [] START, GetHardwareInfoAsyncVDSCommand(HostName = xxxxx.xxxxxx.xxx.xxx, VdsIdAndVdsVDSCommandParametersBase:{hostId='260fd31d-b8a3-4ac7-a919-da2964b05b2b', vds='Host[xxxxx.xxxxxx.xxx.xxx,260fd31d-b8a3-4ac7-a919-da2964b05b2b]'}), log id: 47684f5d
2024-12-09 05:54:30,275Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoAsyncVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [] FINISH, GetHardwareInfoAsyncVDSCommand, return: , log id: 47684f5d
2024-12-09 05:54:30,275Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [] Host 'xxxxx.xxxxxx.xxx.xxx' is running with SELinux in 'DISABLED' mode
2024-12-09 05:54:30,321Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [] EVENT_ID: VDS_NETWORKS_OUT_OF_SYNC(1,110), Host xxxxx.xxxxxx.xxx.xxx's following network(s) are not synchronized with their Logical Network configuration: ovirtmgmt.
2024-12-09 05:54:30,364Z INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6133e03b] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 260fd31d-b8a3-4ac7-a919-da2964b05b2b Type: VDS
2024-12-09 05:54:30,366Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6f8cbaaa] Lock Acquired to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-12-09 05:54:30,373Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6f8cbaaa] Running command: UpdateClusterCommand internal: true. Entities affected : ID: 1ff65d4a-458d-49a9-a66d-9d1bfdfc5df7 Type: ClusterAction group EDIT_CLUSTER_CONFIGURATION with role type ADMIN
2024-12-09 05:54:30,435Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6f8cbaaa] EVENT_ID: FENCE_DISABLED_IN_CLUSTER_POLICY(9,016), Fencing is disabled in Fencing Policy of the Cluster infradc, so HA VMs running on a non-responsive host will not be restarted elsewhere.
2024-12-09 05:54:30,465Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6f8cbaaa] EVENT_ID: SYSTEM_UPDATE_CLUSTER(835), Host cluster infradc was updated by system
2024-12-09 05:54:30,465Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-56) [6f8cbaaa] Lock freed to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-12-09 05:54:30,621Z INFO [org.ovirt.engine.core.bll.InitVdsOnUpCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [6ff01f24] Running command: InitVdsOnUpCommand internal: true. Entities affected : ID: d8c291be-6953-4185-8a51-080b5bf013a2 Type: StoragePool
2024-12-09 05:54:30,703Z ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [6ff01f24] Can not run fence action on host 'xxxxx.xxxxxx.xxx.xxx', no suitable proxy host was found.
2024-12-09 05:54:30,705Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetMOMPolicyParametersVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [6ff01f24] START, SetMOMPolicyParametersVDSCommand(HostName = xxxxx.xxxxxx.xxx.xxx, MomPolicyVDSParameters:{hostId='260fd31d-b8a3-4ac7-a919-da2964b05b2b'}), log id: 2fc3f5c3
2024-12-09 05:54:30,730Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetMOMPolicyParametersVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [6ff01f24] FINISH, SetMOMPolicyParametersVDSCommand, return: , log id: 2fc3f5c3
2024-12-09 05:54:30,820Z INFO [org.ovirt.engine.core.bll.hostdev.RefreshHostDevicesCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [272ff804] Running command: RefreshHostDevicesCommand internal: true. Entities affected : ID: 260fd31d-b8a3-4ac7-a919-da2964b05b2b Type: VDSAction group MANIPULATE_HOST with role type ADMIN
2024-12-09 05:54:32,122Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [272ff804] EVENT_ID: VDS_DETECTED(13), Status of host xxxxx.xxxxxx.xxx.xxx was set to Up.
2024-12-09 05:54:32,224Z INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [1887e967] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 260fd31d-b8a3-4ac7-a919-da2964b05b2b Type: VDS
2024-12-09 05:54:32,227Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [3932330e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-12-09 05:54:32,233Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [3932330e] Running command: UpdateClusterCommand internal: true. Entities affected : ID: 1ff65d4a-458d-49a9-a66d-9d1bfdfc5df7 Type: ClusterAction group EDIT_CLUSTER_CONFIGURATION with role type ADMIN
2024-12-09 05:54:32,282Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [3932330e] EVENT_ID: FENCE_DISABLED_IN_CLUSTER_POLICY(9,016), Fencing is disabled in Fencing Policy of the Cluster infradc, so HA VMs running on a non-responsive host will not be restarted elsewhere.
2024-12-09 05:54:32,283Z INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [3932330e] Lock freed to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2024-12-09 05:54:32,286Z INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [5bd74a4b] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: 260fd31d-b8a3-4ac7-a919-da2964b05b2b Type: VDS
2024-12-09 05:54:32,288Z INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [5bd74a4b] Received first domain report for host xxxxx.xxxxxx.xxx.xxx
2024-12-09 05:55:39,126Z INFO [org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-22) [] User admin@internal-authz with profile [internal] successfully logged out
2024-12-09 05:55:39,131Z INFO [org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default task-26) [736ce9c] Running command: TerminateSessionsForTokenCommand internal: true.
2024-12-09 05:59:41,230Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'commandCoordinator' is using 0 threads out of 10, 2 threads waiting for tasks.
2024-12-09 05:59:41,230Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
2024-12-09 05:59:41,230Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'engineScheduledThreadPool' is using 0 threads out of 1, 100 threads waiting for tasks.
2024-12-09 05:59:41,230Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'engineThreadMonitoringThreadPool' is using 1 threads out of 1, 0 threads waiting for tasks.
2024-12-09 05:59:41,230Z INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5, 1 threads waiting for tasks.
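From these logs it looks like the engine issued ConnectStorageServerVDSCommand while vdsmd on the host was still starting up: the connection to port 54321 is refused, the next poll reports "Recovering from crash or Initializing", and the host only goes back to Up a few seconds later. As a workaround I am thinking about waiting until the host reports status "up" before running the storage task. A rough sketch of what I mean, using ovirt.ovirt.ovirt_host_info (the host_fqdn variable is a placeholder of mine):

- name: Wait for the host to be up before adding the storage domain
  ovirt.ovirt.ovirt_host_info:
    auth: "{{ ovirt_auth }}"
    pattern: "name={{ host_fqdn }}"
  register: host_info
  until: host_info.ovirt_hosts | length > 0 and host_info.ovirt_hosts[0].status == "up"
  retries: 30
  delay: 10

Is that a reasonable approach, or should I rather be looking at why vdsmd was restarting in the first place?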
Versions:

ansible-core: 2.16.3-2.el8

Installed packages:

ovirt-ansible-collection.noarch      3.2.0-1.17.el8   @local_ovirt-4.5
ovirt-host.x86_64                    4.5.0-3.el8      @local_ovirt-4.5
ovirt-host-dependencies.x86_64       4.5.0-3.el8      @local_ovirt-4.5
ovirt-hosted-engine-ha.noarch        2.5.1-1.el8      @local_ovirt-4.5
ovirt-hosted-engine-setup.noarch     2.7.1-1.3.el8    @local_ovirt-4.5
ovirt-imageio-client.x86_64          2.5.0-1.el8      @local_ovirt-4.5
ovirt-imageio-common.x86_64          2.5.0-1.el8      @local_ovirt-4.5
ovirt-imageio-daemon.x86_64          2.5.0-1.el8      @local_ovirt-4.5
ovirt-openvswitch.noarch             2.15-4.el8       @local_ovirt-4.5
ovirt-openvswitch-ipsec.noarch       2.15-4.el8       @local_ovirt-4.5
ovirt-openvswitch-ovn.noarch         2.15-4.el8       @local_ovirt-4.5
ovirt-openvswitch-ovn-common.noarch  2.15-4.el8       @local_ovirt-4.5
ovirt-openvswitch-ovn-host.noarch    2.15-4.el8       @local_ovirt-4.5
ovirt-provider-ovn-driver.noarch     1.2.36-1.el8     @local_ovirt-4.5
ovirt-python-openvswitch.noarch      2.15-4.el8       @local_ovirt-4.5
ovirt-vmconsole.noarch               1.0.9-4.el8      @local_ovirt-4.5
ovirt-vmconsole-host.noarch          1.0.9-4.el8      @local_ovirt-4.5

Regards,
Alex Martins

The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_ovirt.ovirt.ovirt_storage_domain_payload_xw64i82v/ansible_ovirt.ovirt.ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py", line 810, in main
    sd_id = storage_domains_module.create()['id']
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/ansible_ovirt.ovirt.ovirt_storage_domain_payload_xw64i82v/ansible_ovirt.ovirt.ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/module_utils/ovirt.py", line 673, in create
    self.build_entity(),
    ^^^^^^^^^^^^^^^^^^^
  File "/tmp/ansible_ovirt.ovirt.ovirt_storage_domain_payload_xw64i82v/ansible_ovirt.ovirt.ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py", line 493, in build_entity
    self._login(storage_type, storage)
  File "/tmp/ansible_ovirt.ovirt.ovirt_storage_domain_payload_xw64i82v/ansible_ovirt.ovirt.ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py", line 461, in _login
    hosts_service.host_service(host_id).iscsi_login(
  File "/usr/lib64/python3.11/site-packages/ovirtsdk4/services.py", line 42212, in iscsi_login
    return self._internal_action(action, 'iscsilogin', None, headers, query, wait)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/site-packages/ovirtsdk4/service.py", line 299, in _internal_action
    return future.wait() if wait else future
           ^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/site-packages/ovirtsdk4/service.py", line 296, in callback
    self._check_fault(response)
  File "/usr/lib64/python3.11/site-packages/ovirtsdk4/service.py", line 134, in _check_fault
    self._raise_error(response, body.fault)
  File "/usr/lib64/python3.11/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Recovering from crash or Initializing]". HTTP response code is 400.
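The traceback shows the module failing inside the iscsi_login() call on the host with the same "Recovering from crash or Initializing" fault, so the task really does seem to be racing with VDSM initialization rather than with the iSCSI target itself. As a blunt workaround I am also considering retrying the task until VDSM has finished initializing. A trimmed sketch of my task with a retry loop added (the 5 x 60 s values are arbitrary):

- name: Add Storage master
  ovirt.ovirt.ovirt_storage_domain:
    auth: "{{ ovirt_auth }}"
    name: data_iscsi
    host: xxx.xxxx.xxx
    data_center: "{{ data_center_name }}"
    domain_function: data
    iscsi:
      target: iqn.2007-11.com.nimblestorage:olvm-infra-dc-v66459751dbd365d4.00000026.bd96f285
      lun_id: 2c32e833f545870cf6c9ce90085f296bd
      address: 192.168.3.252
      port: 3260
  register: add_sd_result
  until: add_sd_result is succeeded
  retries: 5
  delay: 60

If there is a cleaner way to handle this with the collection, I would appreciate a pointer.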