Issue with oVirt 4.5 and Data Warehouse installed on a Separate Machine
by Igor Davidoff
Hello,
I have an issue with the 'engine-setup' step on the DWH (separate server) after upgrading from 4.4.10 to 4.5.
It looks like ovirt-engine-setup is looking for the rpm package 'ovirt-engine' instead of 'ovirt-engine-dwh'.
The reported error is:
"
--== END OF CONFIGURATION ==--
[ INFO ] Stage: Setup validation
[ ERROR ] Failed to execute stage 'Setup validation': Command '/usr/bin/rpm' failed to execute
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20220502100751-fqwb07.log
[WARNING] Remote engine was not configured to be able to access DWH, please check the logs.
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20220502101130-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
"
In the setup log I found:
"
2022-05-02 10:11:30,000+0000 DEBUG otopi.context context._executeMethod:127 Stage validation METHOD otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages.Plugin._validation
2022-05-02 10:11:30,001+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:813 execute: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine'), executable='None', cwd='None', env=None
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:863 execute-result: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine'), rc=1
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:921 execute-output: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine') stdout:
package ovirt-engine is not installed
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:926 execute-output: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine') stderr:
2022-05-02 10:11:30,013+0000 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine-common/distro-rpm/packages.py", line 463, in _validation
    oenginecons.Const.ENGINE_PACKAGE_NAME,
  File "/usr/lib/python3.6/site-packages/otopi/plugin.py", line 931, in execute
    command=args[0],
RuntimeError: Command '/usr/bin/rpm' failed to execute
2022-05-02 10:11:30,015+0000 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Setup validation': Command '/usr/bin/rpm' failed to execute
"
Usually the upgrade between minor versions in 4.4 was just:
# yum update ovirt\*setup\*
# engine-setup
# yum update
As that did not work, I tried a fresh installation of CentOS 8 Stream and a restore of the DWH database and configuration:
# engine-backup --mode=restore --file=backup.bck --provision-all-databases
-> no luck.
The last idea was a fresh installation of CentOS 8 Stream plus a fresh installation of ovirt-engine-dwh 4.5 (without the restore):
-> the same error.
The engine side works fine.
I compared the current setup logs with those from the installation and all the minor upgrades of ovirt-engine-dwh before 4.5,
and there I only found the rpm validation for the package 'ovirt-engine-dwh':
"
2022-02-08 16:11:29,846+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:813 execute: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh'), executable='None', cwd='None', env=None
2022-02-08 16:11:29,877+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:863 execute-result: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh'), rc=0
2022-02-08 16:11:29,878+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:921 execute-output: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh') stdout:
ovirt-engine-dwh-4.4.10-1.el8.noarch
2022-02-08 16:11:29,878+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:926 execute-output: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh') stderr:
2022-02-08 16:11:29,878+0000 DEBUG otopi.transaction transaction.commit:152 committing 'DWH Engine database Transaction'
"
It looks like engine-setup knows it is the DWH server, but is trying to validate the wrong rpm package.
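For anyone hitting the same thing, the failing validation query from the log can be reproduced by hand to confirm the diagnosis (both package names are taken from the logs above; nothing here changes the system):

```shell
# The query engine-setup runs during 'Setup validation' -- on a DWH-only
# host this returns rc=1 ("package ovirt-engine is not installed"), which
# is exactly what aborts the stage:
rpm -q --queryformat='%{version}-%{release}' ovirt-engine

# The package that is actually installed on a separate DWH host:
rpm -q ovirt-engine-dwh
```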
Any ideas how to work around this?
Thank you!
Obsoleting Packages in 'dnf check-update' on RHEL 8.6 hosts
by Scott Worthington
Hello,
My 4.5.1.3-1.el8 oVirt homelab cluster is running on RHEL 8.6 hosts and a
RHEL 8.6 stand-alone engine.
During the installation, I followed the instructions for RHEL 8.6:
https://ovirt.org/download/install_on_rhel.html
After the installation is complete, all packages are up-to-date, and
everything is running, I run 'dnf check-update' and find two packages
listed under "Obsoleting Packages":
"""
Last metadata expiration check: 0:43:06 ago on Mon 01 Aug 2022 08:08:50 AM EDT.
Obsoleting Packages
centos-stream-release.noarch 8.6-1.el8        @@System
centos-stream-release.noarch 8.6-1.el8        @@System
centos-stream-release.noarch 8.6-1.el8        @@System
redhat-release.x86_64 8.6-0.1.el8             @rhel-8-for-x86_64-baseos-rpms
"""
This makes it appear in the oVirt Engine UI that the hosts always have an
upgrade pending.
"centos-stream-release.noarch" appears to contain files similar to the
original RHEL 8.6 package "redhat-release-8.6-0.1.el8", except for the
addition of "centos*"-named files.
Any thoughts on how to change this behavior, so the hosts don't appear to
have packages to upgrade when a 'dnf update' does nothing with these
"Obsoleting Packages"?
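One possible workaround, assuming you simply want dnf (and therefore the engine's upgrade check) to stop considering the obsoleting candidate: exclude it via the standard `excludepkgs` option in dnf.conf. The package name comes from the output above; since dnf.conf normally contains only the `[main]` section, appending to the end of the file lands the option there.

```shell
# Sketch: exclude the obsoleting package on each RHEL host so it no longer
# shows up as a candidate in 'dnf check-update'.
echo 'excludepkgs=centos-stream-release' | sudo tee -a /etc/dnf/dnf.conf

# Re-check; the "Obsoleting Packages" entry should no longer be listed.
dnf check-update
```

Note the trade-off: the excluded package can then never be installed or updated via dnf on that host until the line is removed again.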
Thanks in advance.
Dark Mode
by Dean L
Is there a way to place oVirt in Dark Mode? It's one of those things I really like about moVirt. :)
Thanks!
Gluster volume "deleted" by accident --- Is it possible to recover?
by itforums51@gmail.com
hi everyone,
I have a 3-node oVirt 4.4.6 cluster in a hyperconverged (HC) setup.
Today I intended to extend the data and vmstore volumes by adding another brick to each; then by accident I pressed the "cleanup" button. It looks like the volumes were deleted.
I am wondering whether there is a process for trying to recover these volumes, and therefore all VMs (including the Hosted Engine).
```
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
gluster_lv_data gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
gluster_lv_data-brick1 gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4 0.45
gluster_lv_engine gluster_vg_sda4 -wi-a----- 100.00g
gluster_lv_vmstore gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
gluster_lv_vmstore-brick1 gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4 0.33
gluster_thinpool_gluster_vg_sda4 gluster_vg_sda4 twi-aot--- <7.07t 11.46 0.89
```
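From the `lvs` output, the thin LVs for data and vmstore are still present but inactive (no 'a' in the `Vwi---t---` attribute field), so the data may not be gone yet. As a read-only first check — assuming the "cleanup" only removed the Gluster volume definitions and deactivated the LVs rather than wiping them, and with a hypothetical mount point — something like this might be worth trying before anything else:

```shell
# Activate the deactivated thin LVs (VG and LV names from the lvs output above):
lvchange -ay gluster_vg_sda4/gluster_lv_data
lvchange -ay gluster_vg_sda4/gluster_lv_vmstore

# Mount one read-only and check whether the brick contents survived:
mkdir -p /mnt/brick-check
mount -o ro /dev/gluster_vg_sda4/gluster_lv_data /mnt/brick-check
ls /mnt/brick-check
```

If the brick directories are intact, recreating the Gluster volumes over the existing bricks is one recovery approach that has been discussed on this list, but take LVM snapshots or file-level backups before attempting any write operation.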
I would appreciate any advice.
TIA