Error during setup
by Alexey Martynov
Hello
During installation, at the IPv4 setup stage, I saw an error:
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20240910010101-ldtwqb.log
These are the last lines in the log:
2024-09-10 01:02:54,694+0300 DEBUG otopi.context context.dumpEnvironment:771 ENV SYSTEM/rebootDeferTime=int:'10'
2024-09-10 01:02:54,694+0300 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
2024-09-10 01:02:54,694+0300 DEBUG otopi.context context._executeMethod:124 Stage pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
2024-09-10 01:02:54,695+0300 DEBUG otopi.context context._executeMethod:134 otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate condition False
2024-09-10 01:02:54,695+0300 INFO otopi.context context.runSequence:614 Stage: Termination
2024-09-10 01:02:54,695+0300 DEBUG otopi.context context.runSequence:619 STAGE terminate
2024-09-10 01:02:54,696+0300 DEBUG otopi.context context._executeMethod:124 Stage terminate METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate
2024-09-10 01:02:54,696+0300 ERROR otopi.plugins.gr_he_common.core.misc misc._terminate:164 Hosted Engine deployment failed
2024-09-10 01:02:54,696+0300 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20240910010101-ldtwqb.log
2024-09-10 01:02:54,697+0300 DEBUG otopi.context context._executeMethod:124 Stage terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
2024-09-10 01:02:54,992+0300 DEBUG otopi.context context._executeMethod:124 Stage terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
2024-09-10 01:02:54,992+0300 DEBUG otopi.context context._executeMethod:134 otopi.plugins.otopi.dialog.machine.Plugin._terminate condition False
2024-09-10 01:02:54,992+0300 DEBUG otopi.context context._executeMethod:124 Stage terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
How can I resolve it?
2 months, 1 week
Moving hosted engine to iscsi storage domain
by Devin A. Bougie
Hello,
We are attempting to move a hosted engine from an NFS to an iSCSI storage domain, following the normal backup / restore procedure.
With a little intervention in the new hosted engine VM, everything seems to be working with the new engine, and the process gets to the point of trying to create the new iSCSI hosted_storage domain. At this point, it fails with "Storage domain cannot be reached. Please ensure it is accessible from the host(s)."
The host itself does see the storage, and everything looks fine using the iscsiadm and multipath commands. However, if I look at the new hosted_storage domain in the new engine, it's stuck "unattached", and the "hosted-engine --deploy --restore-from-file …" command loops at the "Please specify the storage you would like to use" step.
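For reference, the host-side checks mentioned above look something like this (a sketch; the mpathm name comes from the LUN listing further down and may differ in your setup):

```shell
# Verify the iSCSI session and multipath state on the host
iscsiadm -m session -P 3   # session details and attached SCSI devices
multipath -ll              # every path should show active/ready
lsblk /dev/mapper/mpathm   # the LUN offered in the deploy dialog
```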
Please see below for an excerpt of the output and logs, and let me know what additional information I can provide.
Many thanks,
Devin
Here's the output from the "hosted-engine --deploy --restore-from-file …" command.
———
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : iSCSI login]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get iSCSI LUNs]
[ INFO ] ok: [localhost]
The following luns have been found on the requested target:
[1] mpathm 1024.0GiB IFT DS 3000 Series
status: free, paths: 8 active
Please select the destination LUN (1) [1]:
[ INFO ] iSCSI discard after delete is disabled
[ INFO ] Creating Storage Domain
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the storage interface to be up]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check local VM dir stat]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce local VM dir existence]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch host facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster version]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce cluster major version]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce cluster minor version]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set storage_format]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add NFS storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add iSCSI storage domain]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add Fibre Channel storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get storage domain details]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the appliance OVF]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get ovf data]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get disk size from ovf data]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get required size]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove unsuitable storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check storage domain free space]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]\". HTTP response code is 400."}
———
Here's an excerpt from the "ovirt-hosted-engine-setup-ansible-create_storage_domain-20240911081524-c7yrkf.log":
———
2024-09-11 08:16:12,354-0400 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml', 'ansible_task': 'ovirt.ovirt.hosted_engine_setup : Activate storage domain'}
2024-09-11 08:16:12,354-0400 DEBUG ansible on_any args TASK: ovirt.ovirt.hosted_engine_setup : Activate storage domain kwargs is_conditional:False
2024-09-11 08:16:12,355-0400 DEBUG ansible on_any args localhost TASK: ovirt.ovirt.hosted_engine_setup : Activate storage domain kwargs
2024-09-11 08:16:25,550-0400 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<class 'list'>" value: "[]"
2024-09-11 08:16:25,551-0400 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<class 'list'>" value: "[]"
2024-09-11 08:16:25,551-0400 DEBUG var changed: host "localhost" var "play_hosts" type "<class 'list'>" value: "[]"
2024-09-11 08:16:25,551-0400 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"changed": false,
"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_ovirt_storage_domain_payload_cne75az3/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py\", line 811, in main\n File \"/tmp/ansible_ovirt_storage_domain_payload_cne75az3/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py\", line 660, in post_create_check\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/services.py\", line 3647, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n return future.wait() if wait else future\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py\", line 55, in wait\n return self._code(response)\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py\", line 229, in callback\n self._check_fault(response)\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n self._raise_error(response, body)\n File \"/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n raise error\novirtsdk4.Error: Fault reason is \"Operation Failed\". Fault detail is \"[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]\". HTTP response code is 400.\n",
"invocation": {
"module_args": {
"backup": null,
"comment": null,
"critical_space_action_blocker": null,
"data_center": "Default",
"description": null,
"destroy": null,
"discard_after_delete": null,
"domain_function": "data",
"fcp": null,
"fetch_nested": false,
"format": null,
"glusterfs": null,
"host": "lnxvirt01-p55.classe.cornell.edu",
"id": null,
"iscsi": null,
"localfs": null,
"managed_block_storage": null,
"name": "hosted_storage",
"nested_attributes": [],
"nfs": null,
"poll_interval": 3,
"posixfs": null,
"state": "present",
"storage_format": null,
"timeout": 180,
"wait": true,
"warning_low_space": null,
"wipe_after_delete": null
}
},
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]\". HTTP response code is 400."
},
"ansible_task": "Activate storage domain",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 13
}
2024-09-11 08:16:25,551-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fac823de1f0> kwargs ignore_errors:None
2024-09-11 08:16:25,552-0400 INFO ansible stats {
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml",
"ansible_playbook_duration": "00:59 Minutes",
"ansible_result": "type: <class 'dict'>\nstr: {'localhost': {'ok': 20, 'failures': 1, 'unreachable': 0, 'changed': 1, 'skipped': 9, 'rescued': 0, 'ignored': 0}}",
"ansible_type": "finish",
"status": "FAILED"
}
2024-09-11 08:16:25,552-0400 INFO SUMMARY:
Duration Task Name
-------- --------
[ < 1 sec ] Execute just a specific set of steps
[ 00:03 ] Force facts gathering
[ 00:01 ] Check local VM dir stat
[ 00:01 ] Obtain SSO token using username/password credentials
[ 00:01 ] Fetch host facts
[ < 1 sec ] Fetch cluster ID
[ 00:01 ] Fetch cluster facts
[ 00:01 ] Fetch Datacenter facts
[ < 1 sec ] Fetch Datacenter ID
[ < 1 sec ] Fetch Datacenter name
[ < 1 sec ] Fetch cluster name
[ < 1 sec ] Fetch cluster version
[ < 1 sec ] Set storage_format
[ 00:15 ] Add iSCSI storage domain
[ 00:01 ] Get storage domain details
[ 00:01 ] Find the appliance OVF
[ 00:01 ] Get ovf data
[ < 1 sec ] Get disk size from ovf data
[ < 1 sec ] Get required size
[ FAILED ] Activate storage domain
2024-09-11 08:16:25,553-0400 DEBUG ansible on_any args <ansible.executor.stats.AggregateStats object at 0x7fac83c8a700> kwargs
———
And here's an excerpt from the "ovirt-hosted-engine-setup-20240910222757-5ru4mo.log":
———
2024-09-11 08:16:12,144-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
2024-09-11 08:16:25,264-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'exception': 'Traceback (most recent call last):\n File "/tmp/ansible_ovirt_storage_domain_payload_cne75az3/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py", line 811, in main\n File "/tmp/ansible_ovirt_storage_domain_payload_cne75az3/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py", line 660, in post_create_check\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/services.py", line 3647, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 232, in _internal_add\n return future.wait() if wait else future\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 55, in wait\n return self._code(response)\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 229, in callback\n self._check_fault(response)\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 132, in _check_fault\n self._raise_error(response, body)\n File "/usr/lib64/python3.9/site-packages/ovirtsdk4/service.py", line 118, in _raise_error\n raise error\novirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]". HTTP response code is 400.\n', 'msg': 'Fault reason is "Operation Failed". Fault detail is "[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]". 
HTTP response code is 400.', 'invocation': {'module_args': {'host': 'lnxvirt01-p55.classe.cornell.edu', 'data_center': 'Default', 'name': 'hosted_storage', 'wait': True, 'state': 'present', 'timeout': 180, 'poll_interval': 3, 'fetch_nested': False, 'nested_attributes': [], 'domain_function': 'data', 'id': None, 'description': None, 'comment': None, 'localfs': None, 'nfs': None, 'iscsi': None, 'managed_block_storage': None, 'posixfs': None, 'glusterfs': None, 'fcp': None, 'wipe_after_delete': None, 'backup': None, 'critical_space_action_blocker': None, 'warning_low_space': None, 'destroy': None, 'format': None, 'discard_after_delete': None, 'storage_format': None}}, '_ansible_no_log': False, 'changed': False}
2024-09-11 08:16:25,364-0400 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]". HTTP response code is 400.
2024-09-11 08:16:25,464-0400 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]\". HTTP response code is 400."}
2024-09-11 08:16:25,765-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 PLAY RECAP [localhost] : ok: 20 changed: 1 unreachable: 0 skipped: 9 failed: 1
2024-09-11 08:16:25,866-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:226 ansible-playbook rc: 2
2024-09-11 08:16:25,866-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:233 ansible-playbook stdout:
2024-09-11 08:16:25,867-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:236 ansible-playbook stderr:
2024-09-11 08:16:25,867-0400 DEBUG otopi.plugins.otopi.dialog.human human.queryString:174 query OVEHOSTED_STORAGE_DOMAIN_TYPE
2024-09-11 08:16:25,868-0400 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
———
2 months, 1 week
ovirt hosts unassigned
by dragons@namecheap.com
Hello,
we found our oVirt hosts in the Unassigned state after renewing certificates for the oVirt Engine via engine-setup --offline. The first run ended with an error: there was an immutable attribute on the file /etc/pki/ovirt-engine/keys/engine_id_rsa.
After removing the attribute, we ran engine-setup --offline again.
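The attribute removal and re-run described above, as a sketch (file path taken from the post):

```shell
# Check for and clear the immutable attribute that broke the first run
lsattr /etc/pki/ovirt-engine/keys/engine_id_rsa   # look for the 'i' flag
chattr -i /etc/pki/ovirt-engine/keys/engine_id_rsa
engine-setup --offline
```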
In the events for each host we see the following errors:
VDSM ov3.example.com command Get Host Capabilities failed: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Is there any way to return the host to normal without losing data?
2 months, 1 week
Re: [External] : cpu pinning validation failed - virtual cpu does not exist in vm
by parallax
thanks for your reply
it looks like I made a mistake: in fact, the total number of cores for the VM
was less than the number I was trying to pin.
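The mistake described above (pinning a vCPU index the VM does not have) can be illustrated with a small sketch that checks an oVirt-style pinning string of the form vCPU#pCPU_vCPU#pCPU against the VM's vCPU count; this is a simplified model of the engine's validation, not its actual code:

```python
def invalid_vcpus(pinning: str, num_vcpus: int) -> list[int]:
    """Return vCPU indices in a pinning string (e.g. '0#0_1#1')
    that do not exist in a VM with num_vcpus virtual CPUs."""
    bad = []
    for pair in pinning.split("_"):
        vcpu, _, _pcpus = pair.partition("#")
        if int(vcpu) >= num_vcpus:
            bad.append(int(vcpu))
    return bad

# For a 14-vCPU VM, pinning vCPUs 16-22 fails validation, 0-11 is fine:
print(invalid_vcpus("16#16_17#17_18#18", 14))  # -> [16, 17, 18]
print(invalid_vcpus("0#0_1#1_2#2", 14))        # -> []
```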
Thu, 5 Sep 2024 at 16:32, Marcos Sungaila <marcos.sungaila(a)oracle.com>:
> Hi,
>
>
>
> You disabled "Count Threads As Cores" so you need extra steps to identify
> which cores you can pin.
>
>
>
> You can check your CPU cores availability using the lscpu command, look:
>
> # lscpu | grep -e NUMA -e ^CPU\(s\)
>
> CPU(s): 72
>
> NUMA node0 CPU(s): 0-17,36-53
>
> NUMA node1 CPU(s): 18-35,54-71
>
> The previous command lists physical cores and threads.
>
>
>
> The following command shows you logical cores running on a physical core:
>
> # grep -e "processor" -e "physical id" -e "core id" /proc/cpuinfo | grep
> -E -A1 -B1 "physical id.*0$" | grep -E -B 2 "core id.* 0$"
>
> processor : 0 <-- physical core 0 on socket 0
>
> physical id : 0
>
> core id : 0
>
> --
>
> processor : 36 <-- second thread on physical core 0 on socket 0
>
> physical id : 0
>
> core id : 0
>
>
>
> I hope it can help you troubleshoot your scenario.
>
>
>
> Marcos
>
>
>
> *From:* parallax <dd432690(a)gmail.com>
> *Sent:* Sunday, July 28, 2024 6:25 AM
> *To:* users <users(a)ovirt.org>
> *Subject:* [External] : [ovirt-users] cpu pinning validation failed -
> virtual cpu does not exist in vm
>
>
>
> server with two Gold 6346
>
> this message appears when trying to pin any core after core 14:
>
> 16#16_17#17_18#18_19#19_20#20_21#21_22#22
>
> "cpu pinning validation failed - virtual cpu does not exist in vm"
>
>
>
> "Count Threads As Cores" is disabled
>
> pinning cores from 9 to 15 goes well
> 0#0_1#1_2#2_3#3_4#4_5#5_6#6_7#7_8#8_9#9_10#10_11#11
>
2 months, 1 week
Re: [External] : Fallback Method to Start a VM if oVirt Engine is Offline
by Tommi Finance
Hi Marcos,
thank you so much for your feedback and clarification.
I recognize you are a big player in the Oracle universe; we are
currently migrating from Oracle VM Server to Oracle Linux Virtualization
Manager.
The goal was also to include some other servers from the VMware hypervisor.
Potentially this is a showstopper in our strategy, but we will see.
thank you again
Thomas
On Mon, 2 Sep 2024 at 21:55, Marcos Sungaila <marcos.sungaila(a)oracle.com> wrote:
> Hi Thomas,
>
> No, it is not possible. All VM interactions (start, stop, reboot, etc.)
> are performed by the Engine application (ovirt-engine service).
>
> Marcos
>
> -----Original Message-----
> From: Thomas Leitner <tommifinance(a)gmail.com>
> Sent: Monday, August 26, 2024 8:29 AM
> To: users(a)ovirt.org
> Subject: [External] : [ovirt-users] Fallback Method to Start a VM if oVirt
> Engine is Offline
>
> Hello,
>
> Is there a way to start a VM directly from the KVM host if the oVirt
> Engine is down or temporarily unavailable due to network issues or other
> reasons?
>
> thanks for any advice.
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
>
2 months, 1 week
Storage Domain Not synchronized, To synchronize them, please move them to maintenance and then activate.
by Arup Jyoti Thakuria
Dear All,
In our oVirt Engine infrastructure, we recently received an alert for one of our storage domains.
The alert is as follows:
===================================
Storage domains with IDs [fb34e5b7-e850-4d66-ab47-3011d2000338] could not be synchronized. To synchronize them, please move them to maintenance and then activate.
==========================================================
What are the consequences of this alert? If we ignore it, what can happen to the infrastructure? Are we going to face any issues with the VMs? This infrastructure is currently in production.
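The maintenance-then-activate cycle the alert asks for can be done from the Admin Portal (Storage > Domains), or scripted with the Python SDK. A hedged sketch, assuming ovirtsdk4 is installed and using placeholder credentials (the domain ID is the one from the alert):

```python
import ovirtsdk4 as sdk

# Placeholder engine URL and credentials; adjust for your environment.
connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="secret",
    ca_file="/etc/pki/ovirt-engine/ca.pem",
)
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search="name=Default")[0]  # assumed data center name
attached_sds = dcs_service.data_center_service(dc.id).storage_domains_service()
sd_service = attached_sds.storage_domain_service(
    "fb34e5b7-e850-4d66-ab47-3011d2000338"  # ID from the alert
)
sd_service.deactivate()  # move to maintenance; in practice, poll until
                         # the domain reaches Maintenance before activating
sd_service.activate()    # reactivate to resynchronize
connection.close()
```

Plan a window for this: while the domain is in maintenance, its VMs' disks are unavailable.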
2 months, 1 week
How should I properly replace a failed host?
by ziyi Liu
Version 4.5
Gluster 2+1 (replica 2 + arbiter)
My host is damaged and I need to reinstall the system. The host FQDN will still be the old one.
What is the correct replacement process? There is little information on replacing a faulty host.
2 months, 2 weeks
Hosted Engine Boot Failure Recovery Guidance
by Clint Boggio
Good Day oVirt Community
I've experienced a condition whereby the HE on my three-node cluster will not boot. Connecting to the hosted-engine console reveals that the hosted engine is at the "grub rescue" prompt. I don't really know how to rescue the engine from that level of failure, but if anybody can give me a quick set of steps to recover that engine I'd appreciate it and certainly give it my best try.
If it's easier to re-deploy the engine and leverage a backup file in the process, I have such a backup file, created yesterday, readily available. I would need a link to a process for decommissioning the dead machine and re-deploying the replacement HE from the backup file.
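For the restore path, the redeploy is driven by a single command (the same one used in the migration thread above; the backup file path here is a placeholder):

```shell
# On a clean host, redeploy the hosted engine from an engine-backup file
# (path is a placeholder; point it at yesterday's backup file)
hosted-engine --deploy --restore-from-file=/root/engine-backup.tar.gz
```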
Any help would be greatly appreciated.
2 months, 2 weeks
cpu pinning validation failed - virtual cpu does not exist in vm
by parallax
server with two Gold 6346
this message appears when trying to pin any core after core 14:
16#16_17#17_18#18_19#19_20#20_21#21_22#22
"cpu pinning validation failed - virtual cpu does not exist in vm"
"Count Threads As Cores" is disabled
pinning cores from 9 to 15 goes well
0#0_1#1_2#2_3#3_4#4_5#5_6#6_7#7_8#8_9#9_10#10_11#11
2 months, 2 weeks
console certificate is not trusted
by suporte@logicworks.pt
Hi,
I just installed the latest oVirt version (4.5.6-1.el9) using CentOS Stream 9.
When trying to access the console, I get this error message: "The certificate is not trusted."
I'm using a self-signed certificate.
Any ideas?
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
2 months, 2 weeks