ovirt node NonResponsive
by carlos.mendes@mgo.cv
Hello,
I have oVirt with two nodes, one of which is NonResponsive, and I can't manage it because it is in an Unknown state.
It seems the nodes lost connection to their gateway for a while.
The node (ovirt2), however, is having consistent problems. The following sequence of events is reproducible and causes the host to enter a "NonOperational" state in the cluster:
What is the proper way of restoring management?
I have a two-node cluster with the oVirt manager running standalone on a CentOS Stream 9 virtual machine and the oVirt node running the most recent oVirt Node 4.5.4 software.
I can then re-activate ovirt2, which appears as green for approximately 5 minutes and then repeats all of the above issues.
What can I do to troubleshoot this?
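One thing that might help narrow this down is to watch both sides while re-activating ovirt2; just a sketch, using the default oVirt log locations:
# on the engine VM
tail -f /var/log/ovirt-engine/engine.log
# on ovirt2
tail -f /var/log/vdsm/vdsm.log
journalctl -u vdsmd -f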
1 year, 3 months
How to restore oVirt nodes from NonResponsive to UP while VMs are running
by José Pascual
Hello,
I have oVirt with two nodes that are NonResponsive; all the VMs are
still running properly, but I can't manage them because they are in an Unknown state.
It seems the nodes lost connection to their gateway for a while.
I have thought of first restarting the node where the engine is not
running and trying to bring it UP, then restarting the engine from within the
VM to see whether it starts up on this node.
What is the proper way of restoring management?
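Before restarting anything, it may also be worth checking from each node whether vdsmd itself still answers locally; a minimal check, assuming a default vdsm installation:
systemctl status vdsmd
vdsm-client Host getVMList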
Thanks,
Best Regards
--
Regards,
José Pascual Gallud Martínez
Name | Engineering Dept. <http://telfy.com/>
1 year, 3 months
Re: Some problems with ovirt
by אריה קלטר
Hi,
Sorry for the late reply
I tried now to get logs for the VM, for the scenario where it gets stuck in the
middle of powering up because of the missing disk.
The vm id is 486cea97-ed56-47d4-930b-5f85c51ad3cf, the vm name is kc26-1
[image: image.png]
About the second problem, with the migration, I also attached the logs
here, both from the source and from the destination server.
Any clue how to solve these problems?
It is really annoying that, a lot of the time, when I try to power on the
VM it hangs like that, and when the system does a live migration of the
engine VM it never works and I need to power it off and on
manually from the CLI.
For both scenarios I uploaded both the /var/log/vdsm/vdsm.log and
/var/log/libvirt/qemu/<vmname>.log
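In case it is useful to whoever looks at this, the engine-side log for the same time window might help as well; a rough way to pull the entries for this VM (assuming the engine is reachable over SSH; the host name is a placeholder, and the VM id is the one quoted above):
ssh root@<engine-fqdn> 'grep 486cea97-ed56-47d4-930b-5f85c51ad3cf /var/log/ovirt-engine/engine.log | tail -n 200'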
1 year, 3 months
Certificates expired...
by Jason P. Thomas
We're moving to a new facility and pretty much building the
infrastructure out from scratch. As such, the oVirt 4.4 cluster at our
old location has gone largely unnoticed because it has just worked for
years. In July it seems some of the certs expired (specifically the
engine apache cert) and we only just noticed. I followed a post for
changing the apache cert and that allowed us to login to the engine web
interface, but nothing in the interface showed as connected. VMs are
still running, I even rebooted one via ssh before realizing the
certificate issues. In "Events" in the engine, it was complaining about
certs being expired on the hosts. I found this post to this mailing
list and followed the instructions possibly in error:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NHJNETOIMSHD...
Now the engine won't start at all and I'm afraid I'm one power outage
away from complete disaster. I need to keep the old location up and
functioning for another 4-6 months, so any insights would be greatly
appreciated.
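For anyone triaging this, a quick way to see which certificates have actually expired; the paths below are assumed from a default oVirt install:
# on the engine
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/certs/apache.cer
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/ca.pem
# on each host
openssl x509 -noout -enddate -in /etc/pki/vdsm/certs/vdsmcert.pem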
Sincerely,
Jason P. Thomas
1 year, 3 months
Problems running CentOS 9 Stream GenericCloud guests
by Gianluca Amato
Hello everyone,
I'm trying to run CentOS 9 Stream GenericCloud as a guest in oVirt 4.5.4. While the images in the ovirt-image-repository seem to work fine (in particular, I've tried version 20211119.0), the latest version (20230727.1) does not start. After the initial boot messages, it gives the error:
Starting dracut mount hook
dracut-mount[413]: Warning: Can't mount root filesystem
Starting dracut emergency shell
and brings me to the emergency shell.
Is this some known bug?
Note that if I start from the old image and then upgrade all the packages, I have no problems at all.
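If it helps narrow things down, one way to compare what the working and the failing image actually contain; just a sketch that assumes qemu-img and the libguestfs tools are installed, and the file name is only a guess at the download name:
qemu-img info CentOS-Stream-GenericCloud-9-20230727.1.x86_64.qcow2
virt-filesystems -a CentOS-Stream-GenericCloud-9-20230727.1.x86_64.qcow2 --all --long -h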
Thanks in advance for any help.
Gianluca
1 year, 4 months
Installing oVirt as a self-hosted engine - big big problem :()
by Jorge Visentini
Hi guys, starting the weekend with a "cucumber" like that in my hands.
I've been racking my brains for about 4 days to deploy a new engine.
Turns out I have already tested *4.4.10*, *4.5.4.x*, and *4.5.5 (master)* (el8
and el9) and none of them works.
It seems to me to be an Ansible or Python problem, but I'm not sure.
I've read several oVirt Reddit and GitHub threads, but their suggestions seem to
have no effect anymore. I believe the culprit is some package in the CentOS Stream
repositories, *but unfortunately I don't have the repositories frozen locally here*.
Deploy hangs at *[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for
the host to be up]*
I already tried updating the version of *python netaddr*, as I read on GitHub,
and it still didn't work.
I also tried to *freeze the ansible update* in the engine and it didn't
work.
I updated the version of *ovirt-ansible-collection to
ovirt-ansible-collection-3.1.3-0.1.master.20230420113738.el8.noarch.rpm*
and it didn't work either...
*The error seems to be on all oVirt builds*, but I don't know what I'm
doing wrong anymore because I can't pinpoint where the error is.
I appreciate any tips
*Below are some log outputs:*
[root@ksmmi1r02ovirt36 ~]# tail -f /var/log/vdsm/vdsm.log
2023-07-07 22:23:30,144-0300 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=6df1f5ed-0f41-4001-bb2e-e50fb0214ac7 (api:37)
2023-07-07 22:23:30,144-0300 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:35,146-0300 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=bd2a755d-3488-4b43-8ca4-44717dd6b017 (api:31)
2023-07-07 22:23:35,146-0300 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=bd2a755d-3488-4b43-8ca4-44717dd6b017 (api:37)
2023-07-07 22:23:35,146-0300 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:39,320-0300 INFO (periodic/3) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=68567ce3-b579-469d-a46d-7bafc7b3e6bd (api:31)
2023-07-07 22:23:39,320-0300 INFO (periodic/3) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=68567ce3-b579-469d-a46d-7bafc7b3e6bd
(api:37)
2023-07-07 22:23:40,151-0300 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=fadcf734-9f7e-4681-8764-9d3863718644 (api:31)
2023-07-07 22:23:40,151-0300 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=fadcf734-9f7e-4681-8764-9d3863718644 (api:37)
2023-07-07 22:23:40,151-0300 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:44,183-0300 INFO (jsonrpc/1) [api.host] START
getAllVmStats() from=::1,49920 (api:31)
2023-07-07 22:23:44,184-0300 INFO (jsonrpc/1) [api.host] FINISH
getAllVmStats return={'status': {'code': 0, 'message': 'Done'},
'statsList': (suppressed)} from=::1,49920 (api:37)
2023-07-07 22:23:45,157-0300 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=504d8028-35be-45a3-b24d-4ec7cbc82f7e (api:31)
2023-07-07 22:23:45,157-0300 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=504d8028-35be-45a3-b24d-4ec7cbc82f7e (api:37)
2023-07-07 22:23:45,157-0300 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:50,162-0300 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=297ad1df-c855-4fbb-a89f-dfbe7a1b60a2 (api:31)
2023-07-07 22:23:50,162-0300 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=297ad1df-c855-4fbb-a89f-dfbe7a1b60a2 (api:37)
2023-07-07 22:23:50,162-0300 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
[root@ksmmi1r02ovirt36 ~]# journalctl -f
-- Logs begin at Fri 2023-07-07 21:57:13 -03. --
Jul 07 22:24:46 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[13790]: 13791 still running (86045)
Jul 07 22:24:50 ksmmi1r02ovirt36.kosmo.cloud platform-python[22812]:
ansible-ovirt_host_info Invoked with
pattern=name=ksmmi1r02ovirt36.kosmo.cloud auth={'token':
'eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIyZzVfWUdWX08wSFJoWnlVeFNkdGl4d0liRWV6Wkp5NTgwN3BaXzUxelBvIn0.eyJleHAiOjE2ODg3OTY0MzAsImlhdCI6MTY4ODc3OTE1MCwianRpIjoiOTBlOWE5ZTAtMmExNS00MzNiLWIxOGQtMmUwNmI4MTQ5NGE2IiwiaXNzIjoiaHR0cHM6Ly9rc21lbmdpbmUwMS5rb3Ntby5jbG91ZC9vdmlydC1lbmdpbmUtYXV0aC9yZWFsbXMvb3ZpcnQtaW50ZXJuYWwiLCJhdWQiOiJhY2NvdW50Iiwic3ViIjoiYWRkMWMyYzYtYzJjMy00N2M4LWI1ODUtNGI2MTU2ZDAxYTE3IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoib3ZpcnQtZW5naW5lLWludGVybmFsIiwic2Vzc2lvbl9zdGF0ZSI6IjNhMWUzZTU2LWIyZTUtNGMyYi05YTIxLThjZjE1YzY4NzlmNiIsImFjciI6IjEiLCJhbGxvd2VkLW9yaWdpbnMiOlsiaHR0cHM6Ly9rc21lbmdpbmUwMS5rb3Ntby5jbG91ZCJdLCJyZWFsbV9hY2Nlc3MiOnsicm9sZXMiOlsiZGVmYXVsdC1yb2xlcy1vdmlydC1pbnRlcm5hbCIsIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6Im92aXJ0LWV4dD10b2tlbjpwYXNzd29yZC1hY2Nlc3Mgb3ZpcnQtZXh0PXRva2VuLWluZm86cHVibGljLWF1dGh6LXNlYXJjaCBvdmlydC1hcHAtYXBpIG92aXJ0LWV4dD10b2tlbi1pbmZvOnZhbGlkYXRlIHByb2ZpbGUgZW1haWwgb3ZpcnQtZXh0PXRva2VuLWluZm86YXV0aHotc2VhcmNoIiwic2lkIjoiM2ExZTNlNTYtYjJlNS00YzJiLTlhMjEtOGNmMTVjNjg3OWY2IiwiZW1haWxfdmVyaWZpZWQiOmZhbHNlLCJncm91cHMiOlsiL292aXJ0LWFkbWluaXN0cmF0b3IiXSwicHJlZmVycmVkX3VzZXJuYW1lIjoiYWRtaW5Ab3ZpcnQiLCJlbWFpbCI6ImFkbWluQGxvY2FsaG9zdCJ9.o9PsulNw0urPphWITcB6Y3wpHQiiQ0v00su6XorITcvNElzkfHqyYfJd8W-kIfgElh6BNnCmYyIwtX7t3T4-PiLgDdipH1J9uzuDBXkmNBNcVmFimfUAqyC8aUITK56CqZ5TyRyHqhOicPciqGSY8R98hQ8I8y11w2RiIFT0rQYnRev75gjKoqUH29uNyeCAdTyKvPSGHNm1pLLrtPUmk-JCGmsYytNRCMHAPoNIlZP3k94PbQ9pI4jZ5O7kcRSgJik8tUDOVglcL4g0MoAJwracek2MUTvK8pDpRghI9hSQVLFtAXCyGRxfHHzTko4EbHBbFlz5s3pfs2kbF6TFmw',
'url': 'https://ksmengine01.kosmo.cloud/ovirt-engine/api', 'ca_file': None,
'insecure': True, 'timeout': 0, 'compress': True, 'kerberos': False,
'headers': None, 'hostname': None, 'username': None, 'password': None}
fetch_nested=False nested_attributes=[] follow=[] all_content=False
cluster_version=None
Jul 07 22:24:51 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[13790]: 13791 still running (86040)
Jul 07 22:24:56 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[13790]: 13791 still running (86035)
Jul 07 22:25:00 ksmmi1r02ovirt36.kosmo.cloud platform-python[22829]:
ansible-ovirt_host_info Invoked with
pattern=name=ksmmi1r02ovirt36.kosmo.cloud auth={'token':
'eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIyZzVfWUdWX08wSFJoWnlVeFNkdGl4d0liRWV6Wkp5NTgwN3BaXzUxelBvIn0.eyJleHAiOjE2ODg3OTY0MzAsImlhdCI6MTY4ODc3OTE1MCwianRpIjoiOTBlOWE5ZTAtMmExNS00MzNiLWIxOGQtMmUwNmI4MTQ5NGE2IiwiaXNzIjoiaHR0cHM6Ly9rc21lbmdpbmUwMS5rb3Ntby5jbG91ZC9vdmlydC1lbmdpbmUtYXV0aC9yZWFsbXMvb3ZpcnQtaW50ZXJuYWwiLCJhdWQiOiJhY2NvdW50Iiwic3ViIjoiYWRkMWMyYzYtYzJjMy00N2M4LWI1ODUtNGI2MTU2ZDAxYTE3IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoib3ZpcnQtZW5naW5lLWludGVybmFsIiwic2Vzc2lvbl9zdGF0ZSI6IjNhMWUzZTU2LWIyZTUtNGMyYi05YTIxLThjZjE1YzY4NzlmNiIsImFjciI6IjEiLCJhbGxvd2VkLW9yaWdpbnMiOlsiaHR0cHM6Ly9rc21lbmdpbmUwMS5rb3Ntby5jbG91ZCJdLCJyZWFsbV9hY2Nlc3MiOnsicm9sZXMiOlsiZGVmYXVsdC1yb2xlcy1vdmlydC1pbnRlcm5hbCIsIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6Im92aXJ0LWV4dD10b2tlbjpwYXNzd29yZC1hY2Nlc3Mgb3ZpcnQtZXh0PXRva2VuLWluZm86cHVibGljLWF1dGh6LXNlYXJjaCBvdmlydC1hcHAtYXBpIG92aXJ0LWV4dD10b2tlbi1pbmZvOnZhbGlkYXRlIHByb2ZpbGUgZW1haWwgb3ZpcnQtZXh0PXRva2VuLWluZm86YXV0aHotc2VhcmNoIiwic2lkIjoiM2ExZTNlNTYtYjJlNS00YzJiLTlhMjEtOGNmMTVjNjg3OWY2IiwiZW1haWxfdmVyaWZpZWQiOmZhbHNlLCJncm91cHMiOlsiL292aXJ0LWFkbWluaXN0cmF0b3IiXSwicHJlZmVycmVkX3VzZXJuYW1lIjoiYWRtaW5Ab3ZpcnQiLCJlbWFpbCI6ImFkbWluQGxvY2FsaG9zdCJ9.o9PsulNw0urPphWITcB6Y3wpHQiiQ0v00su6XorITcvNElzkfHqyYfJd8W-kIfgElh6BNnCmYyIwtX7t3T4-PiLgDdipH1J9uzuDBXkmNBNcVmFimfUAqyC8aUITK56CqZ5TyRyHqhOicPciqGSY8R98hQ8I8y11w2RiIFT0rQYnRev75gjKoqUH29uNyeCAdTyKvPSGHNm1pLLrtPUmk-JCGmsYytNRCMHAPoNIlZP3k94PbQ9pI4jZ5O7kcRSgJik8tUDOVglcL4g0MoAJwracek2MUTvK8pDpRghI9hSQVLFtAXCyGRxfHHzTko4EbHBbFlz5s3pfs2kbF6TFmw',
'url': 'https://ksmengine01.kosmo.cloud/ovirt-engine/api', 'ca_file': None,
'insecure': True, 'timeout': 0, 'compress': True, 'kerberos': False,
'headers': None, 'hostname': None, 'username': None, 'password': None}
fetch_nested=False nested_attributes=[] follow=[] all_content=False
cluster_version=None
cat
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20230707220613-2t8ze9.log
2023-07-07 22:19:11,816-0300 INFO
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup :
Wait for the host to be up]
2023-07-07 22:39:39,882-0300 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 {'changed': False, 'ovirt_hosts':
[{'href': '/ovirt-engine/api/hosts/d1bf8fb2-74f4-4954-8c34-66eb99ba2bf3',
'comment': '', 'id': 'd1bf8fb2-74f4-4954-8c34-66eb99ba2bf3', 'name':
'ksmmi1r02ovirt36.kosmo.cloud', 'address': 'ksmmi1r02ovirt36.kosmo.cloud',
'affinity_labels': [], 'auto_numa_status': 'unknown', 'certificate':
{'organization': 'kosmo.cloud', 'subject':
'O=kosmo.cloud,CN=ksmmi1r02ovirt36.kosmo.cloud'}, 'cluster': {'href':
'/ovirt-engine/api/clusters/d8784faf-8b77-45c8-9fa4-b9b4b0404d95', 'id':
'd8784faf-8b77-45c8-9fa4-b9b4b0404d95'}, 'cpu': {'speed': 0.0, 'topology':
{}}, 'cpu_units': [], 'device_passthrough': {'enabled': False}, 'devices':
[], 'external_network_provider_configurations': [], 'external_status':
'ok', 'hardware_information': {'supported_rng_sources': []}, 'hooks': [],
'katello_errata': [], 'kdump_status': 'unknown', 'ksm': {'enabled': False},
'max_scheduling_memory': 0, 'memory': 0, 'network_attachments': [], 'nics':
[], 'numa_nodes': [], 'numa_supported': False, 'os':
{'custom_kernel_cmdline': ''}, 'ovn_configured': False, 'permissions': [],
'port': 54321, 'power_management': {'automatic_pm_enabled': True,
'enabled': False, 'kdump_detection': True, 'pm_proxies': []}, 'protocol':
'stomp', 'reinstallation_required': False, 'se_**FILTERED**': {}, 'spm':
{'priority': 5, 'status': 'none'}, 'ssh': {'fingerprint':
'SHA256:Nr04m1g0UxbpqxwMBr93DLHz2m2wzR8+xFJhBVNovHY', 'port': 22,
'public_key': 'ecdsa-sha2-nistp256
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE2EdJn0vJiJUagEK3w2G2nHmziJJasailwapaL06qWU2+BkPwkokSvyK07APhwyynnz6lw8J4y/kWv12D7/r+s='},
'statistics': [], 'status': 'install_failed',
'storage_connection_extensions': [], 'summary': {'total': 0}, 'tags': [],
'transparent_huge_pages': {'enabled': False}, 'type': 'rhel',
'unmanaged_networks': [], 'update_available': False, 'vgpu_placement':
'consolidated'}], 'invocation': {'module_args': {'pattern':
'name=ksmmi1r02ovirt36.kosmo.cloud', 'fetch_nested': False,
'nested_attributes': [], 'follow': [], 'all_content': False,
'cluster_version': None}}, '_ansible_no_log': None, 'attempts': 120}
2023-07-07 22:39:39,983-0300 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 ignored: [localhost]: FAILED! =>
{"attempts": 120, "changed": false, "ovirt_hosts": [{"address":
"ksmmi1r02ovirt36.kosmo.cloud", "affinity_labels": [], "auto_numa_status":
"unknown", "certificate": {"organization": "kosmo.cloud", "subject":
"O=kosmo.cloud,CN=ksmmi1r02ovirt36.kosmo.cloud"}, "cluster": {"href":
"/ovirt-engine/api/clusters/d8784faf-8b77-45c8-9fa4-b9b4b0404d95", "id":
"d8784faf-8b77-45c8-9fa4-b9b4b0404d95"}, "comment": "", "cpu": {"speed":
0.0, "topology": {}}, "cpu_units": [], "device_passthrough": {"enabled":
false}, "devices": [], "external_network_provider_configurations": [],
"external_status": "ok", "hardware_information": {"supported_rng_sources":
[]}, "hooks": [], "href":
"/ovirt-engine/api/hosts/d1bf8fb2-74f4-4954-8c34-66eb99ba2bf3", "id":
"d1bf8fb2-74f4-4954-8c34-66eb99ba2bf3", "katello_errata": [],
"kdump_status": "unknown", "ksm": {"enabled": false},
"max_scheduling_memory": 0, "memory": 0, "name":
"ksmmi1r02ovirt36.kosmo.cloud", "network_attachments": [], "nics": [],
"numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline":
""}, "ovn_configured": false, "permissions": [], "port": 54321,
"power_management": {"automatic_pm_enabled": true, "enabled": false,
"kdump_detection": true, "pm_proxies": []}, "protocol": "stomp",
"reinstallation_required": false, "se_**FILTERED**": {}, "spm":
{"priority": 5, "status": "none"}, "ssh": {"fingerprint":
"SHA256:Nr04m1g0UxbpqxwMBr93DLHz2m2wzR8+xFJhBVNovHY", "port": 22,
"public_key": "ecdsa-sha2-nistp256
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE2EdJn0vJiJUagEK3w2G2nHmziJJasailwapaL06qWU2+BkPwkokSvyK07APhwyynnz6lw8J4y/kWv12D7/r+s="},
"statistics": [], "status": "install_failed",
"storage_connection_extensions": [], "summary": {"total": 0}, "tags": [],
"transparent_huge_pages": {"enabled": false}, "type": "rhel",
"unmanaged_networks": [], "update_available": false, "vgpu_placement":
"consolidated"}]}
2023-07-07 22:39:40,284-0300 INFO
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup :
Notify the user about a failure]
2023-07-07 22:39:40,685-0300 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 {'msg': 'Host is not up, please check
logs, perhaps also on the engine machine', '_ansible_no_log': None,
'changed': False}
2023-07-07 22:39:40,786-0300 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:113 fatal: [localhost]: FAILED! =>
{"changed": false, "msg": "Host is not up, please check logs, perhaps also
on the engine machine"}
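Since the host ends up with 'status': 'install_failed' in the output above, the host-deploy log on the engine VM would probably show the real failure; a rough way to grab it (the engine FQDN is just the one appearing in these logs, and the file name is whichever is newest):
ssh root@ksmengine01.kosmo.cloud 'ls -t /var/log/ovirt-engine/host-deploy/ | head -n 1'
ssh root@ksmengine01.kosmo.cloud 'tail -n 100 /var/log/ovirt-engine/host-deploy/<newest file from above>'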
Have a nice weekend!
--
Att,
Jorge Visentini
+55 55 98432-9868
1 year, 4 months
Q: New node install failed
by Andrei Verovski
Hi,
I'm trying to install a new node the same way as I did several times before, with oVirt 4.5.2.4-1el8.
It is an HP ProLiant with a clean install of CentOS Stream 9 (in previous installs some time ago I used Stream 8).
Adding the new node failed with the error at the bottom of this e-mail, taken from
ovirt-host-deploy-ansible-20230803161025-node15.starlett.lv-827b7fc4-68b6-410f-aa90-f33884923ee8.log (from /var/log/ovirt-engine/host-deploy/).
I can attach the whole log, but it does not look necessary.
My engine runs on a dedicated PC and is NOT a hosted engine.
Thanks in advance for any help.
Andrei
————————————————
[root@node15 ~]# dnf repolist --enabled
repo id repo name
appstream CentOS Stream 9 - AppStream
baseos CentOS Stream 9 - BaseOS
extras-common CentOS Stream 9 - Extras packages
----------------------------------
2023-08-03 16:10:44 EEST - TASK [ovirt-host-deploy-vdsm : Install ovirt-hosted-engine-setup package] ******
2023-08-03 16:10:47 EEST - {
"uuid" : "b1ffdae5-0679-40c3-8609-1a5b976edc0e",
"counter" : 99,
"stdout" : "fatal: [node15.starlett.lv]: FAILED! => {\"changed\": false, \"failures\": [\"No package ovirt-hosted-engine-setup available.\"], \"msg\": \"Failed to install some of the specified packages\", \"rc\": 1, \"results\": []}",
"start_line" : 87,
"end_line" : 88,
"runner_ident" : "281cf22b-29e3-4b4f-adc4-a47028b1ba59",
"event" : "runner_on_failed",
"pid" : 3199,
"created" : "2023-08-03T13:10:45.632672",
"parent_uuid" : "525400a7-0063-c9c3-eaa2-00000000022c",
"event_data" : {
"playbook" : "ovirt-host-deploy.yml",
"playbook_uuid" : "dcb4654e-2e81-4090-a60e-b007a715eda6",
"play" : "all",
"play_uuid" : "525400a7-0063-c9c3-eaa2-000000000006",
"play_pattern" : "all",
"task" : "Install ovirt-hosted-engine-setup package",
"task_uuid" : "525400a7-0063-c9c3-eaa2-00000000022c",
"task_action" : "yum",
"task_args" : "",
"task_path" : "/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-vdsm/tasks/packages.yml:6",
"role" : "ovirt-host-deploy-vdsm",
"host" : "node15.starlett.lv",
"remote_addr" : "node15.starlett.lv",
"res" : {
"failures" : [ "No package ovirt-hosted-engine-setup available." ],
"results" : [ ],
"rc" : 1,
"msg" : "Failed to install some of the specified packages",
"invocation" : {
"module_args" : {
"name" : [ "ovirt-hosted-engine-setup" ],
"state" : "present",
"allow_downgrade" : false,
"autoremove" : false,
"bugfix" : false,
"cacheonly" : false,
"disable_gpg_check" : false,
"disable_plugin" : [ ],
"disablerepo" : [ ],
"download_only" : false,
"enable_plugin" : [ ],
"enablerepo" : [ ],
"exclude" : [ ],
"installroot" : "/",
"install_repoquery" : true,
"install_weak_deps" : true,
"security" : false,
"skip_broken" : false,
"update_cache" : false,
"update_only" : false,
"validate_certs" : true,
"lock_timeout" : 30,
"allowerasing" : false,
"nobest" : false,
"conf_file" : null,
"disable_excludes" : null,
"download_dir" : null,
"list" : null,
"releasever" : null
}
},
"_ansible_no_log" : false,
"changed" : false
},
"start" : "2023-08-03T13:10:43.563744",
"end" : "2023-08-03T13:10:45.632205",
"duration" : 2.068461,
"ignore_errors" : null,
"event_loop" : null,
"uuid" : "b1ffdae5-0679-40c3-8609-1a5b976edc0e"
}
}
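For what it's worth, the repo list near the top of this mail shows only the base CentOS Stream 9 repositories on the node, which would explain why ovirt-hosted-engine-setup cannot be found. A minimal sketch of enabling the oVirt 4.5 repositories on the node before retrying the add-host step (the release package name is assumed from the CentOS Stream extras packaging):
dnf install -y centos-release-ovirt45
dnf install -y ovirt-hosted-engine-setup   # or simply retry "Add Host" from the engine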
1 year, 4 months