These are my brief notes on installing the oVirt engine 4.5.7 over a 4.5.6 engine running on el8.

This is my environment:

* AlmaLinux 10.1: x86_64_v2
* ovirt-engine-appliance-almalinux10-4.5.7-1.el10.x86_64

I upgraded the node from el8 (CentOS) via el9 (CentOS) to el10 (AlmaLinux), because the el8 4.5.6 engine cannot connect to el10 directly.

I ran into many issues moving from my old installation. After a successful installation you have to put the whole cluster into maintenance mode (powering off all the VMs), raise the cluster compatibility level (4.8) and reinstall all the nodes. (Please keep in mind that this is only a brief memo of my history.)

For the deployment of the hosted-engine, this is what I did. Please note that these are not step-by-step instructions; they are only the working notes I ended up with after many, many iterations.

-----------------------------------

install 4.5.7

dnf install sshpass

hosted-engine --deploy --4 --restore-from-file=260204-backup.tar.gz --config-append=260204-answers-deploy.conf

Please NOTE that you have to use the same settings as the original engine: e.g. if you had Keycloak disabled in the original, you have to set it disabled here too, either interactively or in the "--config-append" file.

Please also note NOT to use the same hosted-engine storage domain for the new engine! You have to set up a NEW one.

patch -p0 -d/ <<EOF
*** /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml.orig	2026-01-06 01:00:00.000000000 +0100
--- /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml	2026-02-04 17:13:38.457406310 +0100
***************
*** 3,10 ****
    block:
      - name: Wait for the local VM
        ansible.builtin.wait_for_connection:
!         delay: 5
!         timeout: 3600
      - name: Add an entry for this host on /etc/hosts on the local VM
        ansible.builtin.lineinfile:
          dest: /etc/hosts
--- 3,18 ----
    block:
      - name: Wait for the local VM
        ansible.builtin.wait_for_connection:
!         delay: 30
!         timeout: 600
!     - name: DEBUG - Test manual SSH connection
!       ansible.builtin.debug:
!         msg: "ssh -v root@{{ hostvars[he_ansible_host_name]['local_vm_ip']['stdout_lines'][0] }} 'echo Connected'"
!
!     - name: DEBUG - Print connection variables
!       ansible.builtin.debug:
!         msg: "Target host: {{ hostvars[he_ansible_host_name]['local_vm_ip']['stdout_lines'][0] }} | FQDN: {{ he_fqdn }} Ansible connection: {{ ansible_connection | default('ssh') }}"
!
      - name: Add an entry for this host on /etc/hosts on the local VM
        ansible.builtin.lineinfile:
          dest: /etc/hosts
EOF

patch -p0 -d/ <<EOF
*** /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/04_engine_final_tasks.yml.orig	2026-02-06 16:31:52.827175249 +0100
--- /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/bootstrap_local_vm/04_engine_final_tasks.yml	2026-02-06 16:32:23.427533682 +0100
***************
*** 10,18 ****
      # After a restart the engine has a 5 minute grace time,
      # other actions like electing a new SPM host or reconstructing
      # the master storage domain could require more time
!     - name: Wait for the engine to reach a stable condition
        ansible.builtin.wait_for:
!         timeout: "600"
        when: he_restore_from_file is defined and he_restore_from_file
      - name: Configure LibgfApi support
        ansible.builtin.command: engine-config -s LibgfApiSupported=true --cver=4.2
--- 10,18 ----
      # After a restart the engine has a 5 minute grace time,
      # other actions like electing a new SPM host or reconstructing
      # the master storage domain could require more time
!     - name: Wait for the engine to reach a stable condition (600s too much)
        ansible.builtin.wait_for:
!         timeout: 180
        when: he_restore_from_file is defined and he_restore_from_file
      - name: Configure LibgfApi support
        ansible.builtin.command: engine-config -s LibgfApiSupported=true --cver=4.2
EOF

. ~/.profile
export http_proxy=http://proxy.dmz.ssis:3128
export https_proxy=http://proxy.dmz.ssis:3128
export ftp_proxy=http://proxy.dmz.ssis:3128
export no_proxy=.ovirt

# connect to ovirt-engine@localhost
echo "proxy=http://proxy.dmz.ssis:3128" | tee -a /etc/yum.conf

# wait until engine-setup starts writing its logs
while [ `ls -1 /var/log/ovirt-engine/setup | wc -l` -eq 0 ]; do echo -n "."; sleep 1; done

su - postgres
# note: default privileges only affect tables created from now on; already
# existing tables would need an explicit GRANT SELECT as well
psql -c "ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA public GRANT SELECT ON TABLES TO ovirt_engine_history_grafana;"
exit

link=/var/lib/grafana/plugins/performancecopilot-pcp-app
while :; do
  if [ -L "$link" ] && ! [ -e "$link" ]; then
    echo "Removing broken link"
    rm -f "$link"
    grafana cli plugins install performancecopilot-pcp-app
    break
  fi
  echo -n "."
  sleep 1
done

patch -p0 -d/ <<EOF
*** /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py.orig	2026-02-04 20:53:23.672000000 +0100
--- /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py	2026-02-04 20:26:23.619000000 +0100
***************
*** 63,69 ****
      def _processTemplate(self, template, dir, mode=None):
          out = os.path.join(
              dir,
!             re.sub('\.in$', '', os.path.basename(template)),
          )
          with open(template, 'r', encoding='utf-8') as f:
              t = Template(f.read())
--- 63,69 ----
      def _processTemplate(self, template, dir, mode=None):
          out = os.path.join(
              dir,
!             re.sub(r'\.in$', '', os.path.basename(template)),
          )
          with open(template, 'r', encoding='utf-8') as f:
              t = Template(f.read())
EOF

chmod a+r /etc/pki/ovirt-engine/keys/engine.p12; chmod a+r /etc/pki/ovirt-engine/keys/jboss.p12

tail -F /var/log/ovirt-engine/setup/*.log

# ansible: Make sure `ovirt-engine` service is running
curl http://localhost/ovirt-engine/services/health

tail -F /var/log/ovirt-engine/engine.log

# ansible: Wait for the engine to reach a stable condition (600 seconds -too much-)

After that, follow the ansible instructions. At the end you have the hosted-engine running in a fully renewed cluster.
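A note on the "same settings as the original engine" point: those choices live as key=type:value pairs in the otopi answer file passed with --config-append (engine-setup keeps its own copies under /var/lib/ovirt-engine/setup/answers/). The fragment below is purely hypothetical, to show the file format only; the real key names (including the exact Keycloak key) must be copied from your own engine's answer file, not from here:

```
# HYPOTHETICAL fragment - illustrates the otopi answer-file format only.
# Copy the actual keys from your own /var/lib/ovirt-engine/setup/answers/ files.
[environment:default]
OVESETUP_CONFIG/keycloakEnable=bool:False
```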
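The grafana plugin loop hinges on the test combination `[ -L ... ] && ! [ -e ... ]`: -L is true for any symlink, while -e dereferences it, so together they single out a dangling link. A self-contained sketch of just that check (throwaway temp directory, nothing oVirt-specific):

```shell
#!/bin/sh
# Detect a dangling symlink: -L sees the link itself, -e follows it to the
# (missing) target, so the combination is true only for a broken link.
dir=$(mktemp -d)
ln -s "$dir/missing-target" "$dir/plugin-link"

if [ -L "$dir/plugin-link" ] && ! [ -e "$dir/plugin-link" ]; then
    echo "dangling"   # printed here, because the target does not exist
fi

rm -rf "$dir"
```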
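About the ovirt-engine.py patch: the only change is the r prefix on the pattern. '\.in$' contains the escape sequence \., which is invalid in ordinary Python string literals; since Python 3.12 this is reported as a SyntaxWarning and is slated to become an error, which is presumably why it surfaces on the el10 Python. The raw string describes the identical regex, so behaviour is unchanged - a quick check of the patched expression:

```shell
# Sanity check: the raw-string regex strips a trailing ".in" exactly as the
# old non-raw literal did, without the invalid-escape warning on recent Python.
python3 - <<'PY'
import re

for name in ("ovirt-engine.conf.in", "already-plain.conf"):
    print(re.sub(r'\.in$', '', name))
PY
# prints:
#   ovirt-engine.conf
#   already-plain.conf
```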
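The curl against the health page usually has to be repeated until the engine actually answers. This retry wrapper is a sketch of mine, not part of the oVirt tooling; the health URL in the comment is just the endpoint from the notes above:

```shell
#!/bin/sh
# retry MAX DELAY CMD...: run CMD until it succeeds, at most MAX times,
# sleeping DELAY seconds between attempts; returns 0 on success, 1 on give-up.
retry() {
    max=$1
    delay=$2
    shift 2
    n=1
    until "$@"; do
        [ "$n" -ge "$max" ] && return 1
        n=$((n + 1))
        sleep "$delay"
    done
}

# Example: poll the engine health page every 5 seconds, for up to 5 minutes:
# retry 60 5 curl -fsS http://localhost/ovirt-engine/services/health
```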