How to modify the MAC address of a bonding interface after changing the NIC
by wodel youchi
Hi,
oVirt 4.4 on CentOS 8.
We have changed the network card of a hypervisor which uses a bonding
interface.
The bonding interface still uses the old MAC address of the former slave
interface.
We tried to modify it via the CLI, but it didn't work: as soon as the node
(hypervisor) is activated, the old MAC is restored.
It seems to be persisted somewhere.
Should we reinstall the host?
Regards.
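A quick way to see where the stale MAC comes from, as a sketch (the bond name `bond0` and the VDSM persistence path are assumptions; adjust to your host):

```shell
# On the host, the bond's runtime state lists each slave's permanent MAC:
#   cat /proc/net/bonding/bond0
#   ip -br link show bond0
# VDSM persists the host network config and reapplies it when the host is
# activated, which is why a manual change gets overwritten; on recent VDSM
# versions the persisted files live under:
#   ls /var/lib/vdsm/persistence/netconf/
# Extracting the permanent hardware address from a /proc/net/bonding line
# (illustrative sample line, not real output from this host):
line='Permanent HW addr: 52:54:00:aa:bb:cc'
mac=${line#Permanent HW addr: }
echo "$mac"
```

If the persisted config still references the old slave's MAC, that would explain the behavior on activation; a full host reinstall should not be needed just for this.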
19 hours
Re: not able to upload disks, iso - Connection to ovirt-imageio service has failed. Ensure that ovirt-engine certificate is registered as a valid CA in the browser.
by Mostafa Md Arefin
Observed in third-party firewall monitoring logs that, during the Test
Connection, the client PC tries to reach the
engine portal on port 54322 instead of 54323 (Image I/O Proxy).
Confirmed in engine-config that ImageTransferProxyEnabled is set to false.
SOLUTION
To confirm that ImageTransferProxyEnabled is set to "false", log in to
the OLVM Engine host/VM as root and execute
the following command:
# engine-config -g ImageTransferProxyEnabled
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
ImageTransferProxyEnabled: false version: general <<< set to false
To enable the Image I/O Proxy and restart the ovirt-engine and
ovirt-imageio services, perform the following as root:
# engine-config -s ImageTransferProxyEnabled=true
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
ImageTransferProxyEnabled: true version: general <<< set to true
# systemctl restart ovirt-engine
# systemctl restart ovirt-imageio
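As a follow-up check, you can verify the engine is actually listening on both imageio ports after the restart. The `ss` line below is an illustrative sample, not captured output:

```shell
# After the restart, confirm both imageio ports are listening on the engine:
#   ss -tln | grep -E ':5432[23]'
# and re-check the option at any time:
#   engine-config -g ImageTransferProxyEnabled
# Pulling the port number out of an ss-style line (illustrative sample):
sample='LISTEN 0 128 0.0.0.0:54323 0.0.0.0:*'
port=$(echo "$sample" | awk '{n=split($4, a, ":"); print a[n]}')
echo "$port"
```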
1 day, 16 hours
oVirt Host Update Problem
by Vladislav Solovei
Freshly installed from a nightly build, on a freshly installed OS :)
OS: AlmaLinux 9.5
I can't update the system (and can't reinstall the host):
Repository copr:copr.fedorainfracloud.org:ovirt:ovirt-master-snapshot is listed more than once in the configuration
Last metadata expiration check: 0:22:12 ago on Fri 24 Jan 2025 11:41:49 EET.
Error:
Problem: package rdo-ovn-host-2:22.12-2.el9s.noarch from ovirt-master-centos-stream-openstack-yoga-testing requires rdo-ovn = 2:22.12-2.el9s, but none of the providers can be installed
- cannot install the best update candidate for package ovn22.09-host-22.09.0-31.el9s.x86_64
- package rdo-ovn-2:22.12-2.el9s.noarch from ovirt-master-centos-stream-openstack-yoga-testing is filtered out by exclude filtering
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
[ovirt-master-centos-stream-openstack-yoga-testing]
name=CentOS Stream 9 - OpenStack Yoga Repository - testing
baseurl=https://buildlogs.centos.org/9-stream/cloud/$basearch/openstack-y...
gpgcheck=0
enabled=1
exclude=
openstack-ansible-core
python3-rdo-openvswitch
rdo-network-scripts-openvswitch
rdo-openvswitch
rdo-ovn
rdo-ovn-central
How can this problem be resolved?
Should the rdo-ovn package, which is a dependency of rdo-ovn-host-2:22.12-2, be filtered out?
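There look to be two separate issues here: the repo being defined twice, and the repo's own `exclude=` list filtering out `rdo-ovn`, which `rdo-ovn-host` requires. A sketch of how to investigate both (the `--disableexcludes` run is a one-off workaround, not a recommended permanent fix):

```shell
# Find which files define the snapshot repo; the "listed more than once"
# warning means at least two of them do:
#   grep -rl 'ovirt-master-snapshot' /etc/yum.repos.d/
# To let dnf see rdo-ovn despite the repo's exclude= list, the filter can
# be disabled for a single transaction:
#   dnf update --disableexcludes=ovirt-master-centos-stream-openstack-yoga-testing
# Counting duplicate section headers in repo content (illustrative sample):
repo_conf='[ovirt-master-snapshot]
name=a
[other]
name=b
[ovirt-master-snapshot]
name=c'
dupes=$(printf '%s\n' "$repo_conf" | grep -c '^\[ovirt-master-snapshot\]')
echo "$dupes"
```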
1 day, 18 hours
Couldn't resolve host name for http://mirrorlist.centos.org/
by fiorletta@ssolo.eu
Hi,
I need to install 3 nodes with a self-hosted engine, but with every Linux distribution I tried I receive the error below:
ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost -> 192.168.1.50]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'centos-ceph-pacific': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=storage-c... [Could not resolve host: mirrorlist.centos.org]", "rc": 1, "results": []}
2025-02-24 17:03:59,038+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "msg": "There was a failure deploying the engine on the local engine VM. The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
2025-02-24 17:03:59,640+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:238 b'[DEPRECATION WARNING]: Encryption using the Python crypt module is deprecated. \n'
2025-02-24 17:03:59,640+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:238 b'The Python crypt module is deprecated and will be removed from Python 3.13. \n'
2025-02-24 17:03:59,641+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:238 b'Install the passlib library for continued encryption functionality. This \n'
2025-02-24 17:03:59,641+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:238 b'feature will be removed in version 2.17. Deprecation warnings can be disabled \n'
2025-02-24 17:03:59,641+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:238 b'by setting deprecation_warnings=False in ansible.cfg.\n'
2025-02-24 17:03:59,643+0100 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Closing up': Failed executing ansible-playbook
2025-02-24 17:04:21,338+0100 ERROR otopi.plugins.gr_he_common.core.misc misc._terminate:164 Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
In /etc/yum.repos.d, the site http://mirrorlist.centos.org does not appear in any of the repository configuration files.
If I install the engine on a physical host, everything works fine and the installation completes without problems.
Does anyone have any idea why I find the old, now decommissioned, CentOS site in the engine configuration?
Tnx
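One likely explanation: the repo file is inside the engine appliance VM's own filesystem, not on the host, which is why the host's /etc/yum.repos.d looks clean. A sketch of how to confirm and work around it (the vault URL rewrite is an assumption, shown on sample text; repo filenames vary):

```shell
# While the local engine VM is up during deployment, search inside it:
#   grep -r 'mirrorlist.centos.org' /etc/yum.repos.d/
# Since mirrorlist.centos.org was decommissioned, a common workaround is to
# comment the mirrorlist and point baseurl at the vault. Illustrative
# transformation on sample repo content (not a real repo file):
repo='mirrorlist=http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=example
#baseurl=http://mirror.centos.org/centos/8-stream/example/'
out=$(printf '%s\n' "$repo" | sed 's|^mirrorlist=|#mirrorlist=|; s|^#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|')
printf '%s\n' "$out"
```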
2 days, 18 hours
Barebone Hosted Engine Deployment fails
by kevin@kllmnn.de
Hey people, I hope someone can help me identify whether there's something I'm doing wrong or whether there's a bug.
Originally I wanted to re-deploy my self-hosted engine, as it is still on version 4.5.5 with CentOS 8 and therefore can't be updated anymore.
To get a fresh backup of the currently running config I did the following:
1. On physical host `hosted-engine --set-maintenance --mode=global`
2. On engine VM `systemctl stop ovirt-engine` and then
3. `engine-backup --scope=all --mode=backup --file=/mnt/ovirt-engine-backup/ovirt-engine-4.5.5-backup.bck --log=/mnt/ovirt-engine-backup/ovirt-engine-4.5.5-backup.log` (where `/mnt/ovirt-engine-backup/` is a NFS share)
4. On physical host `hosted-engine --vm-shutdown`
Then I set up a new physical host with CentOS 9 Stream (Build 20250320), enabled and installed the oVirt repository as mentioned at https://www.ovirt.org/download/install_on_rhel.html, then ran:
`hosted-engine --deploy --4 --restore-from-file=/mnt/ovirt-engine-backup/ovirt-engine-4.5.5-backup.bck` (of course mounting the NFS share prior to this).
But after giving it all the information it needed, it fails after 15-20 minutes with several errors in the logfile, whose primary/root cause I cannot identify.
The first error that appears is this one:
```
2025-03-25 11:19:35,427+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Include after engine-setup custom tasks files for the engine VM]
2025-03-25 11:19:36,730+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Wait for the engine to reach a stable condition]
2025-03-25 11:19:37,431+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 skipping: [localhost]
2025-03-25 11:19:38,133+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Configure LibgfApi support]
2025-03-25 11:19:38,835+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 skipping: [localhost]
2025-03-25 11:19:39,536+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Save original OvfUpdateIntervalInMinutes]
2025-03-25 11:19:42,041+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost -> 192.168.222.170]
2025-03-25 11:19:42,742+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Set OVF update interval to 1 minute]
2025-03-25 11:19:45,147+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': True, 'stdout': 'Index 1 out of bounds for length 1', 'stderr': 'Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false', 'rc': 1, 'cmd': ['engine-config', '-s', 'OvfUpdateIntervalInMinutes=1'], 'start': '2025-03-25 11:19:43.872322', 'end': '2025-03-25 11:19:44.941216', 'delta': '0:00:01.068894', 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'engine-config -s OvfUpdateIntervalInMinutes=1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['Index 1 out of bounds for length 1'], 'stderr_lines': ['Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false'], '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': '192.168.222.170', 'ansible_port': None, 'ansible_user': 'root', 'ansible_
connection': 'smart'}}
2025-03-25 11:19:45,248+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost -> 192.168.222.170]: FAILED! => {"changed": true, "cmd": ["engine-config", "-s", "OvfUpdateIntervalInMinutes=1"], "delta": "0:00:01.068894", "end": "2025-03-25 11:19:44.941216", "msg": "non-zero return code", "rc": 1, "start": "2025-03-25 11:19:43.872322", "stderr": "Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false", "stderr_lines": ["Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false"], "stdout": "Index 1 out of bounds for length 1", "stdout_lines": ["Index 1 out of bounds for length 1"]}
```
The full log is available here: https://filebin.net/dimi5g2o6q20t4aj
I also tried a completely clean deployment, without restoring from backup (once without and once with `ovirt-hosted-engine-cleanup` in between). The errors in the log are exactly the same.
I also tried the oVirt Node master experimental ISO (`ovirt-node-ng-installer-4.5.6-2025031111.c9s.iso`); same issue.
Does the problem lie in my answers to the questions asked by the tool? I cannot identify having given any wrong info.
Can someone maybe reproduce the issue?
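Since the failing step is a plain `engine-config` call, one way to narrow this down is to reproduce it by hand inside the local engine VM (the IP is taken from the log above; whether the manual run fails the same way tells you if the problem is in the engine itself rather than in the deployment answers). A sketch:

```shell
# Reproduce the failing step manually inside the engine VM:
#   ssh root@192.168.222.170
#   engine-config -g OvfUpdateIntervalInMinutes
#   engine-config -s OvfUpdateIntervalInMinutes=1
# "Index 1 out of bounds for length 1" is emitted by engine-config's own
# Java CLI, so an identical manual failure would rule out the playbook.
# Pulling the error out of the DEBUG JSON line (illustrative sample of the
# log format above):
log_line="{'changed': True, 'stdout': 'Index 1 out of bounds for length 1', 'rc': 1}"
err=$(printf '%s' "$log_line" | grep -o "'stdout': '[^']*'" | cut -d"'" -f4)
echo "$err"
```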
2 days, 18 hours
Items to be repaired entries
by ziyi Liu
One of my hosts is faulty and needs to be replaced with a good one. The new host uses the same hostname as the old host and the same backend storage name.
The recovery process is as follows:
1. Delete the storage volume of the faulty host and the storage volume of the arbiter node.
2. Use the glusterfs Ansible script to restore the faulty host (create partitions and join peer nodes).
3. Add the storage volume of the new host.
4. Wait for data recovery.
During data recovery, it always gets stuck at "Items to be repaired" entries.
How should I deal with these items?
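The "Items to be repaired" counter in the UI reflects Gluster's self-heal backlog, so the heal state can be inspected directly on any of the peers. A sketch (`data` is an assumed volume name; the sample output below is illustrative):

```shell
# Inspect the pending-heal backlog per brick:
#   gluster volume heal data info summary
#   gluster volume heal data info          # lists the actual entries
#   gluster volume heal data full          # trigger a full heal if the
#                                          # index-based heal seems stuck
# Parsing the entry count from "heal info" style output (sample):
sample='Brick host1:/gluster_bricks/data
Number of entries: 42'
count=$(printf '%s\n' "$sample" | awk -F': ' '/Number of entries/ {print $2}')
echo "$count"
```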
6 days, 19 hours
network out of sync
by Nathanaël Blanchet
Hi,
After reinstalling the oVirt hosts with AlmaLinux 9, all my hosts are out
of sync on the management bridge. I tried to sync them, but it has no
effect. Then I tried "static" and "DHCP", but that had no effect either.
Note that when I was using oVirt Node 4.5.5 el9, this bug wasn't
present.
It seems to be a bug, because the network is okay and there is no other
issue using AlmaLinux 9.5 with the latest vdsmd.
Thank you for helping.
--
Nathanaël Blanchet
Systems and Network Administrator
IT and Network Service (SIRE)
Information Systems Department
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tel. +33 (0)4 67 54 84 55
Fax +33 (0)4 67 54 84 14
blanchet(a)abes.fr
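The engine marks a network out of sync by comparing its expected config with what VDSM reports from the host, so one way to see what is actually mismatched is to dump VDSM's view on an affected host. A sketch (the capabilities snippet below is an illustrative sample, not real output):

```shell
# Dump the host network state as VDSM reports it to the engine
# (vdsm-client ships with vdsm):
#   vdsm-client Host getCapabilities
# Compare the ovirtmgmt entry (bootproto, bridged, mtu, ...) with what the
# engine's network setup dialog shows, then retry "Sync All Networks".
# Extracting one field from a capabilities snippet (illustrative sample):
caps='"ovirtmgmt": {"bootproto": "dhcp", "bridged": true}'
proto=$(printf '%s' "$caps" | grep -o '"bootproto": "[a-z]*"')
echo "$proto"
```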
2 weeks
Problem with deleting snapshot - unresponsive VM
by Urbanski.r@gmail.com
Hello,
I have ovirt-engine version 4.5.6-1.el9 and nodes version 4.5.5.
We have about 200 VMs in the environment. So far, backups were performed using snapshots + the oVirt API + AWX.
The problem started when, during backup, some machines became unresponsive (it was impossible to get to the console, and it happened randomly), preallocated disks became thin provisioned, and it was not possible to delete the snapshot.
We decided to skip all scripts, take the snapshot directly in ovirt-engine, and then delete it in order to merge.
In many cases these tasks ended successfully, but in random situations the process hung again and the machine became unresponsive.
I thought maybe it was a performance problem (the network to the array, the array itself, etc.). So we set up a dev environment on different hardware (different physical servers and different iSCSI storage). I copied one of the sample machines that had problems. I made a snapshot again and merged it by deleting it from the ovirt-engine interface. It worked a few times, so I decided to do another test.
I made a snapshot of the test VM, logged into it, worked with a file about 16 GB in size, additionally copied it to another place on the disk, and then tried to merge the snapshot; again the VM became unresponsive and it was impossible to delete the snapshot.
Has anyone encountered such a problem and has a way to solve it?
I am attaching the logs from /var/log/ovirt-engine/engine.log
https://pastebin.pl/view/e6e01717
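Deleting a snapshot of a running VM is a live merge, i.e. a qemu block job, so one thing worth checking while it hangs is whether that job is making progress on the host running the VM. A read-only sketch (`vda` and the sample output line are assumptions; take the disk name from `domblklist`):

```shell
# On the host running the VM, watch the merge's block job read-only:
#   virsh -r list
#   virsh -r domblklist <vm-name>
#   virsh -r blockjob <vm-name> vda --info
# A job stuck at the same percentage for a long time points at storage or
# qemu rather than at the engine. Parsing the progress figure from a
# blockjob-style output line (illustrative sample):
sample='Block Commit: [ 73 %]'
pct=$(printf '%s' "$sample" | grep -o '[0-9]\+ %')
echo "$pct"
```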
2 weeks, 1 day