Add HE disks hosted-engine deploy ovirtsdk4-service.py errors
by airawan@certa.id
Dear all,
Currently I have a lab environment with three servers running Rocky Linux 9.4 as hypervisor nodes. Since cockpit-ovirt-dashboard is not available in the EL9 packages, we set up the GlusterFS replicated cluster and the hosted engine from the CLI.
The hosted-engine deploy command I used: hosted-engine --deploy --4
The task that was executing: [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add HE disks]
Error highlight:
[ ERROR ] Traceback (most recent call last):
  File "/tmp/ansible_ovirt_disk_payload_691q2jk2/ansible_ovirt_disk_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_disk.py", line 883, in main
  File "/tmp/ansible_ovirt_disk_payload_691q2jk2/ansible_ovirt_disk_payload.zip/ansible_collections/ovirt/ovirt/plugins/module_utils/ovirt.py", line 672, in create
    entity = self._service.add(
  File "/usr/lib64/python3.9/site-packages/ovirtsdk4/services.py", line 7825, in add
    return self._internal_add(disk, headers, query, wait)
  File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 232, in _internal_add
    return future.wait() if wait else future
  File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 229, in callback
    self._check_fault(response)
  File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
failed: True
msg: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
module_args: {'name': 'he_virtio_disk', 'size': '51GiB', 'format': 'raw', 'sparse': True, 'description': 'Hosted-Engine disk', 'content_type': 'hosted_engine', 'interface': 'virtio', 'storage_domain': 'hosted_storage', 'wait': True, 'timeout': 600, 'auth': {'token':
I would really appreciate any suggestions.
Thank you.
6 months, 1 week
Assistance Needed with oVirt Engine Deployment 4.5 Error
by weichao.jiao@rayseconsult.com
Dear friends:
I am encountering an issue while attempting to deploy the oVirt Engine 4.5 on my local engine VM and I'm seeking guidance on how to resolve this.
Here's a summary of the error message I received during the deployment process:
[ INFO ] TASK [ovirt.ovirt.engine_setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> 192.168.222.27]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'centos-ceph-pacific': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "rc": 1, "results": []}
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost -> 192.168.222.27]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Change ownership of copied engine logs]
[ INFO ] changed: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "There was a failure deploying the engine on the local engine VM. The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
I would appreciate any advice or suggestions on how to proceed with troubleshooting or resolving this error. If there are any specific logs or information you need to assist me, please let me know.
Thank you in advance for your help.
Best regards
6 months, 1 week
Error during SSO authentication: access_denied: Cannot authenticate user '
by Siddu Hadpad
I'm using ovirt-engine-sdk-ruby and getting an SSO authentication error. The authentication failure started after my Mac update (macOS Sonoma 14.6.1).
/gems/ruby-3.0.0/gems/ovirt-engine-sdk-4.6.0/lib/ovirtsdk4/connection.rb:476:in `create_access_token': Error during SSO authentication: access_denied: Cannot authenticate user 'svcauto(a)domain.com': Unable to log in. Verify your login information or contact the system administrator.. (OvirtSDK4::AuthError)
from /Users/077503/.rvm/gems/ruby-3.0.0/gems/ovirt-engine-sdk-4.6.0/lib/ovirtsdk4/connection.rb:646:in `internal_send'
from /Users/077503/.rvm/gems/ruby-3.0.0/gems/ovirt-engine-sdk-4.6.0/lib/ovirtsdk4/connection.rb:202:in `block in send'
from /Users/077503/.rvm/gems/ruby-3.0.0/gems/ovirt-engine-sdk-4.6.0/lib/ovirtsdk4/connection.rb:202:in `synchronize'
from /Users/077503/.rvm/gems/ruby-3.0.0/gems/ovirt-engine-sdk-4.6.0/lib/ovirtsdk4/connection.rb:202:in `send'
from /Users/077503/repos/gems/olam/lib/olam/utils/monkey_patch.rb:202:in `internal_get'
Here is the code:
connection = OvirtSDK4::Connection.new(
  url:      'https://xxxx/ovirt-engine/api',
  username: 'svcauto@domain.com',
  password: 'xxxx',
  insecure: true,   # skips CA verification; consider ca_file instead
  debug:    true
)
vms_service = connection.system_service.vms_service
vms = vms_service.list
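One way to narrow this down is to try the same service account outside the Ruby/macOS stack, for example with the Python SDK from another machine. If that also fails, the problem is on the engine/AAA side rather than something the Mac update changed. A minimal sketch with placeholder values:

import ovirtsdk4 as sdk

# Placeholders: engine URL, account and password.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='svcauto@domain.com',
    password='secret',
    insecure=True,  # mirrors the Ruby snippet; prefer ca_file in real use
)
try:
    # test() hits the API entry point; raise_exception=True re-raises the
    # underlying SSO error instead of just returning False.
    connection.test(raise_exception=True)
    print('SSO token obtained:', connection.authenticate())
finally:
    connection.close()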
6 months, 2 weeks
New oVirt Node Installation, no web console access
by arkansascontrols@icloud.com
I am setting up a lab with 3 nodes. I downloaded the latest oVirt Node ISO (4.5) and completed the install on all 3 nodes. When I browse to the node address I get a login screen that will not accept the root password. During the setup I found no opportunity to create additional users, only the option to set the root password.
How do I access the Web Console? Any help would be appreciated.
6 months, 2 weeks
HA Broker corrupting iSCSI Lockspace
by mblecha@techmerx.com
I'm not sure how it happened, but a few hours ago, the lockspace for the hosted engine became corrupted. sanlock reports a -223 error.
CentOS 8 Stream, Ceph iSCSI backend (Reef at 18.2.4) using tcmu-runner.
I managed to format the lockspace, after shutting down the HA Agents and Brokers on all HA Engine nodes, but as soon as I start up the HA Agent on any node, the lockspace becomes corrupted again, and sanlock starts returning the -223 error message again. I see no relevant other errors in what I can find so far.
Any suggestions on where to investigate next? I have an entire cluster that cannot start/stop/migrate VMs, as the entire Data Center is marked Non Operational.
6 months, 3 weeks
Error during setup
by Alexey Martynov
Hello
During installation, at the IPv4 setup stage, I saw this error:
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20240910010101-ldtwqb.log
These are the last lines in the log:
2024-09-10 01:02:54,694+0300 DEBUG otopi.context context.dumpEnvironment:771 ENV SYSTEM/rebootDeferTime=int:'10'
2024-09-10 01:02:54,694+0300 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
2024-09-10 01:02:54,694+0300 DEBUG otopi.context context._executeMethod:124 Stage pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
2024-09-10 01:02:54,695+0300 DEBUG otopi.context context._executeMethod:134 otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate condition False
2024-09-10 01:02:54,695+0300 INFO otopi.context context.runSequence:614 Stage: Termination
2024-09-10 01:02:54,695+0300 DEBUG otopi.context context.runSequence:619 STAGE terminate
2024-09-10 01:02:54,696+0300 DEBUG otopi.context context._executeMethod:124 Stage terminate METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate
2024-09-10 01:02:54,696+0300 ERROR otopi.plugins.gr_he_common.core.misc misc._terminate:164 Hosted Engine deployment failed
2024-09-10 01:02:54,696+0300 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20240910010101-ldtwqb.log
2024-09-10 01:02:54,697+0300 DEBUG otopi.context context._executeMethod:124 Stage terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
2024-09-10 01:02:54,992+0300 DEBUG otopi.context context._executeMethod:124 Stage terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
2024-09-10 01:02:54,992+0300 DEBUG otopi.context context._executeMethod:134 otopi.plugins.otopi.dialog.machine.Plugin._terminate condition False
2024-09-10 01:02:54,992+0300 DEBUG otopi.context context._executeMethod:124 Stage terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
How can I resolve it?
6 months, 3 weeks
Moving hosted engine to iscsi storage domain
by Devin A. Bougie
Hello,
We are attempting to move a hosted engine from an NFS to an iSCSI storage domain, following the normal backup / restore procedure.
With a little intervention in the new hosted engine VM, everything seems to be working with the new engine and the process gets to the point of trying to create the new iSCSI hosted_storage domain. At this point, it fails with "Storage domain cannot be reached. Please ensure it is accessible from the host(s).”
The host itself does see the storage, and everything looks fine using iscsiadm and multipath commands. However, if I look at the new hosted_storage domain in the new engine, it’s stuck “unattached” and the “hosted-engine --deploy --restore-from-file …” command loops at the "Please specify the storage you would like to use” step.
Please see below for an excerpt of the output and logs, and let me know what additional information I can provide.
Many thanks,
Devin
Here’s the output from the “hosted-engine --deploy --restore-from-file …” command.
———
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : iSCSI login]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get iSCSI LUNs]
[ INFO ] ok: [localhost]
The following luns have been found on the requested target:
[1] mpathm 1024.0GiB IFT DS 3000 Series
status: free, paths: 8 active
Please select the destination LUN (1) [1]:
[ INFO ] iSCSI discard after delete is disabled
[ INFO ] Creating Storage Domain
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the storage interface to be up]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check local VM dir stat]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce local VM dir existence]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch host facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster version]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce cluster major version]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce cluster minor version]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set storage_format]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add NFS storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add iSCSI storage domain]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add Fibre Channel storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get storage domain details]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the appliance OVF]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get ovf data]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get disk size from ovf data]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get required size]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove unsuitable storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check storage domain free space]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]\". HTTP response code is 400.”}
———
Here’s an excerpt from the "ovirt-hosted-engine-setup-ansible-create_storage_domain-20240911081524-c7yrkf.log”:
———
2024-09-11 08:16:12,354-0400 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml', 'ansible_task': 'ovirt.ovirt.hosted_engine_setup : Activate storage domain'}
2024-09-11 08:16:12,354-0400 DEBUG ansible on_any args TASK: ovirt.ovirt.hosted_engine_setup : Activate storage domain kwargs is_conditional:False
2024-09-11 08:16:12,355-0400 DEBUG ansible on_any args localhost TASK: ovirt.ovirt.hosted_engine_setup : Activate storage domain kwargs
2024-09-11 08:16:25,550-0400 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<class 'list'>" value: "[]"
2024-09-11 08:16:25,551-0400 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<class 'list'>" value: "[]"
2024-09-11 08:16:25,551-0400 DEBUG var changed: host "localhost" var "play_hosts" type "<class 'list'>" value: "[]"
2024-09-11 08:16:25,551-0400 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"changed": false,
"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_ovirt_storage_domain_payload_cne75az3/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py\", line 811, in main\n File \"/tmp/ansible_ovirt_storage_domain_payload_cne75az3/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py\", line 660, in post_create_check\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/services.py\", line 3647, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n return future.wait() if wait else future\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py\", line 55, in wait\n return self._code(response)\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py\", line 229, in callback\n self._check_fault(response)\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n self._raise_error(response, body)\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n raise error\novirtsdk4.Error: Fault reason is \"Operation Failed\". Fault detail is \"[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]\". HTTP response code is 400.\n",
"invocation": {
"module_args": {
"backup": null,
"comment": null,
"critical_space_action_blocker": null,
"data_center": "Default",
"description": null,
"destroy": null,
"discard_after_delete": null,
"domain_function": "data",
"fcp": null,
"fetch_nested": false,
"format": null,
"glusterfs": null,
"host": "lnxvirt01-p55.classe.cornell.edu",
"id": null,
"iscsi": null,
"localfs": null,
"managed_block_storage": null,
"name": "hosted_storage",
"nested_attributes": [],
"nfs": null,
"poll_interval": 3,
"posixfs": null,
"state": "present",
"storage_format": null,
"timeout": 180,
"wait": true,
"warning_low_space": null,
"wipe_after_delete": null
}
},
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]\". HTTP response code is 400."
},
"ansible_task": "Activate storage domain",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 13
}
2024-09-11 08:16:25,551-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fac823de1f0> kwargs ignore_errors:None
2024-09-11 08:16:25,552-0400 INFO ansible stats {
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml",
"ansible_playbook_duration": "00:59 Minutes",
"ansible_result": "type: <class 'dict'>\nstr: {'localhost': {'ok': 20, 'failures': 1, 'unreachable': 0, 'changed': 1, 'skipped': 9, 'rescued': 0, 'ignored': 0}}",
"ansible_type": "finish",
"status": "FAILED"
}
2024-09-11 08:16:25,552-0400 INFO SUMMARY:
Duration Task Name
-------- --------
[ < 1 sec ] Execute just a specific set of steps
[ 00:03 ] Force facts gathering
[ 00:01 ] Check local VM dir stat
[ 00:01 ] Obtain SSO token using username/password credentials
[ 00:01 ] Fetch host facts
[ < 1 sec ] Fetch cluster ID
[ 00:01 ] Fetch cluster facts
[ 00:01 ] Fetch Datacenter facts
[ < 1 sec ] Fetch Datacenter ID
[ < 1 sec ] Fetch Datacenter name
[ < 1 sec ] Fetch cluster name
[ < 1 sec ] Fetch cluster version
[ < 1 sec ] Set storage_format
[ 00:15 ] Add iSCSI storage domain
[ 00:01 ] Get storage domain details
[ 00:01 ] Find the appliance OVF
[ 00:01 ] Get ovf data
[ < 1 sec ] Get disk size from ovf data
[ < 1 sec ] Get required size
[ FAILED ] Activate storage domain
2024-09-11 08:16:25,553-0400 DEBUG ansible on_any args <ansible.executor.stats.AggregateStats object at 0x7fac83c8a700> kwargs
———
And here’s from the "ovirt-hosted-engine-setup-20240910222757-5ru4mo.log”:
———
2024-09-11 08:16:12,144-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
2024-09-11 08:16:25,264-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'exception': 'Traceback (most recent call last):\n File "/tmp/ansible_ovirt_storage_domain_payload_cne75az3/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py", line 811, in main\n File "/tmp/ansible_ovirt_storage_domain_payload_cne75az3/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py", line 660, in post_create_check\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/services.py", line 3647, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 232, in _internal_add\n return future.wait() if wait else future\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 55, in wait\n return self._code(response)\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 229, in callback\n self._check_fault(response)\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 132, in _check_fault\n self._raise_error(response, body)\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 118, in _raise_error\n raise error\novirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]". HTTP response code is 400.\n', 'msg': 'Fault reason is "Operation Failed". Fault detail is "[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]". HTTP response code is 400.', 'invocation': {'module_args': {'host': 'lnxvirt01-p55.classe.cornell.edu', 'data_center': 'Default', 'name': 'hosted_storage', 'wait': True, 'state': 'present', 'timeout': 180, 'poll_interval': 3, 'fetch_nested': False, 'nested_attributes': [], 'domain_function': 'data', 'id': None, 'description': None, 'comment': None, 'localfs': None, 'nfs': None, 'iscsi': None, 'managed_block_storage': None, 'posixfs': None, 'glusterfs': None, 'fcp': None, 'wipe_after_delete': None, 'backup': None, 'critical_space_action_blocker': None, 'warning_low_space': None, 'destroy': None, 'format': None, 'discard_after_delete': None, 'storage_format': None}}, '_ansible_no_log': False, 'changed': False}
2024-09-11 08:16:25,364-0400 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]". HTTP response code is 400.
2024-09-11 08:16:25,464-0400 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]\". HTTP response code is 400."}
2024-09-11 08:16:25,765-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 PLAY RECAP [localhost] : ok: 20 changed: 1 unreachable: 0 skipped: 9 failed: 1
2024-09-11 08:16:25,866-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:226 ansible-playbook rc: 2
2024-09-11 08:16:25,866-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:233 ansible-playbook stdout:
2024-09-11 08:16:25,867-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:236 ansible-playbook stderr:
2024-09-11 08:16:25,867-0400 DEBUG otopi.plugins.otopi.dialog.human human.queryString:174 query OVEHOSTED_STORAGE_DOMAIN_TYPE
2024-09-11 08:16:25,868-0400 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
———
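For context, the step that fails here is the ovirt_storage_domain module's post_create_check, which attaches the freshly created hosted_storage domain to the Default data center and activates it. The same operation can be attempted by hand with the Python SDK against the bootstrap engine VM to see how the engine reacts, independent of the deploy loop. A minimal sketch, with placeholder URL and credentials:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: bootstrap engine URL, admin password.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
try:
    system = connection.system_service()
    sd = system.storage_domains_service().list(search='name=hosted_storage')[0]
    dc = system.data_centers_service().list(search='name=Default')[0]

    attached_sds = system.data_centers_service() \
        .data_center_service(dc.id).storage_domains_service()

    # Attach if still unattached, then activate; either call raises the same
    # ovirtsdk4.Error the playbook reported if the host cannot reach the LUN.
    if sd.status == types.StorageDomainStatus.UNATTACHED:
        attached_sds.add(types.StorageDomain(id=sd.id))
    attached_sds.storage_domain_service(sd.id).activate()
finally:
    connection.close()

If the activate call raises the same fault, the host-side error at the matching timestamp in /var/log/vdsm/vdsm.log is usually where the actual iSCSI/multipath problem shows up.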
6 months, 3 weeks
ovirt hosts unassigned
by dragons@namecheap.com
Hello,
we found our oVirt hosts in the Unassigned state after renewing certificates for the oVirt engine via engine-setup --offline. The first run ended with an error; the file /etc/pki/ovirt-engine/keys/engine_id_rsa had the immutable attribute set.
After removing the attribute, we ran engine-setup --offline again.
In the events for each host we see the following errors:
VDSM ov3.example.com command Get Host Capabilities failed: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Is there any way to return the host to normal without losing data?
6 months, 3 weeks