disk pending in "finalizing" state
by Diego Ercolani
ovirt-engine-4.5.1.3-1.el8.noarch
Hello, I have a situation where a disk is stuck in the "finalizing" state, caused by an attempted backup via Veeam.
The backup process was interrupted, and I have cleared the job states with the dbutils script (/usr/share/ovirt-engine/setup/dbutils/task_cleaner.sh), even though the script didn't report any pending jobs...
I also tried unlock_entity on the disk, but according to the script there is no locked disk.
In the ovirt-engine GUI the disk is still shown as "finalizing",
and I'm stuck here.
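A sketch of the read-only checks I was going to run next on the engine host; the table and column names (image_transfers, images.imagestatus) are my guess from older list threads, not something I have verified:
```
# Read-only look at the engine DB; "engine" is the default DB name, adjust if yours differs.
# Table/column names below are my assumption.
sudo -u postgres psql engine -c \
  "SELECT disk_id, phase, last_updated FROM image_transfers;"
# imagestatus: as far as I understand, 1 = OK, 2 = LOCKED, 4 = ILLEGAL
sudo -u postgres psql engine -c \
  "SELECT image_group_id, imagestatus FROM images WHERE imagestatus <> 1;"
```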
Can someone point me in the right direction?
2 years, 8 months
Authentication with Active Directory
by Alireza Eskandari
Hi all,
I'm trying to connect an Active Directory to my oVirt installation for authentication.
However, the base DN of my Active Directory is not the default one, so I need to
specify it in the oVirt configuration in order to find users.
How can I configure this?
In my current configuration, when I try to log in, I get this message:
server_error: Cannot resolve principal 'xxxx(a)yy.zz'
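In case it matters, the only way I know to (re)create the profile is the standard aaa-ldap setup tool; a minimal sketch of that path (package and paths are the stock oVirt ones, as far as I know), in case the answer is simply to re-run it with different answers:
```
# Standard oVirt AD/LDAP integration tooling; run on the engine host.
dnf install -y ovirt-engine-extension-aaa-ldap-setup
ovirt-engine-extension-aaa-ldap-setup
# The generated profile lands under these paths and can be edited by hand:
ls /etc/ovirt-engine/aaa/ /etc/ovirt-engine/extensions.d/
```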
Regards,
Alireza
2 years, 8 months
ovirt-engine manager, certificate issue
by david
hello
I have a problem logging in to the ovirt-engine manager in my browser.
The warning message in the browser shows this text:
PKIX path validation failed: java.security.cert.CertPathValidatorException:
validity check failed
To solve this problem I am advised to run engine-setup,
and here is my question: will running engine-setup have any impact on the
working hosts (hypervisors)?
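For what it's worth, this is how I was planning to check whether the certificates have actually expired (paths are the standard engine PKI locations, as far as I know):
```
# Check expiry of the web (apache) and internal engine certificates on the engine host.
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/certs/apache.cer
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/certs/engine.cer
# The engine CA certificate, for completeness:
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/ca.pem
```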
ovirt version 4.4.4.7-1.el8
thanks
2 years, 8 months
Make QXL the default
by Colin Coe
Hey all
We've just updated to RHV 4.4 SP1 (aka 4.5) from 4.4, and now our ansible
workflows are failing with:
cannot run VM. Selected display type is not supported by the operating
system
The guest OSes are RHEL 6 and RHEL 7 (no, I can't change this, as the software
stack only works with these).
I couldn't find anything in the ansible ovirt collection to change from VGA
to QXL.
Any ideas on how I can make QXL the default instead of VGA? Maybe a DB
change?
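The only non-DB route I could think of is an osinfo override on the engine. The mechanism (a properties file under /etc/ovirt-engine/osinfo.conf.d/ plus an engine restart) should be right, but the exact OS key names and the display property are my guess, so they would need to be checked against /usr/share/ovirt-engine/conf/osinfo-defaults.properties first:
```
# Sketch only: the OS keys (e.g. rhel_6x64) and the property path are assumptions;
# verify them in /usr/share/ovirt-engine/conf/osinfo-defaults.properties before use.
cat > /etc/ovirt-engine/osinfo.conf.d/99-qxl-default.properties <<'EOF'
os.rhel_6x64.devices.display.protocols.value = spice/qxl,vnc/qxl
os.rhel_7x64.devices.display.protocols.value = spice/qxl,vnc/qxl
EOF
systemctl restart ovirt-engine
```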
Thanks
2 years, 8 months
Issue with oVirt 4.5 and Data Warehouse installed on a Separate Machine
by Igor Davidoff
Hello,
I have an issue with the engine-setup step on the DWH (separate server) after upgrading from 4.4.10 to 4.5.
It looks like ovirt-engine-setup is looking for the rpm package 'ovirt-engine' instead of 'ovirt-engine-dwh'.
The reported error is:
"
--== END OF CONFIGURATION ==--
[ INFO ] Stage: Setup validation
[ ERROR ] Failed to execute stage 'Setup validation': Command '/usr/bin/rpm' failed to execute
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20220502100751-fqwb07.log
[WARNING] Remote engine was not configured to be able to access DWH, please check the logs.
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20220502101130-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
"
In the setup log I found:
"
2022-05-02 10:11:30,000+0000 DEBUG otopi.context context._executeMethod:127 Stage validation METHOD otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages.Plugin._validation
2022-05-02 10:11:30,001+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:813 execute: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine'), executable='None', cwd='None', env=None
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:863 execute-result: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine'), rc=1
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:921 execute-output: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine') stdout:
package ovirt-engine is not installed
2022-05-02 10:11:30,013+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:926 execute-output: ('/usr/bin/rpm', '-q', '--queryformat=%{version}-%{release}', 'ovirt-engine') stderr:
2022-05-02 10:11:30,013+0000 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine-common/distro-rpm/packages.py", line 463, in _validation
oenginecons.Const.ENGINE_PACKAGE_NAME,
File "/usr/lib/python3.6/site-packages/otopi/plugin.py", line 931, in execute
command=args[0],
RuntimeError: Command '/usr/bin/rpm' failed to execute
2022-05-02 10:11:30,015+0000 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Setup validation': Command '/usr/bin/rpm' failed to execute
"
Usually the upgrade between minor versions in 4.4 was just:
# yum update ovirt\*setup\*
# engine-setup
# yum update
As that did not work, I tried a fresh installation of CentOS 8 Stream and a restore of the DWH database and configuration:
# engine-backup --mode=restore --file=backup.bck --provision-all-databases
-> no luck
The last idea was a fresh installation of CentOS 8 Stream plus a fresh installation of ovirt-engine-dwh 4.5 (without a restore):
-> the same error.
The engine side works fine.
I compared the current setup logs with those of the installation and of all the minor upgrades of ovirt-engine-dwh before 4.5,
and there the rpm validation was only run for the package 'ovirt-engine-dwh':
"
2022-02-08 16:11:29,846+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:813 execute: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh'), executable='None', cwd='None', env=None
2022-02-08 16:11:29,877+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.executeRaw:863 execute-result: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh'), rc=0
2022-02-08 16:11:29,878+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:921 execute-output: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh') stdout:
ovirt-engine-dwh-4.4.10-1.el8.noarch
2022-02-08 16:11:29,878+0000 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages plugin.execute:926 execute-output: ('/usr/bin/rpm', '-q', 'ovirt-engine-dwh') stderr:
2022-02-08 16:11:29,878+0000 DEBUG otopi.transaction transaction.commit:152 committing 'DWH Engine database Transaction'
"
It looks like engine-setup knows it is the DWH server, but is trying to validate the wrong rpm package.
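For reference, reproducing the failing check and the package state by hand on the DWH host (the rpm call is copied from the setup log above; the package list is just what I would expect on a DWH-only box):
```
# Same query that engine-setup runs, per the log above -> rc=1 here
rpm -q --queryformat '%{version}-%{release}\n' ovirt-engine
# What is actually installed on this (DWH-only) machine
rpm -q ovirt-engine-dwh ovirt-engine-dwh-setup ovirt-engine-setup
```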
Any ideas how to work around this?
Thank you!
2 years, 8 months
Obsoleting Packages in 'dnf check-update' on RHEL 8.6 hosts
by Scott Worthington
Hello,
My 4.5.1.3-1.el8 oVirt homelab cluster is running on RHEL 8.6 hosts and a
RHEL 8.6 stand-alone engine.
During the installation, I followed the instructions for RHEL 8.6:
https://ovirt.org/download/install_on_rhel.html
After the installation is complete, all packages are up to date, and
everything is running, I run 'dnf check-update' and find that two
packages are listed under "Obsoleting Packages":
"""
Last metadata expiration check: 0:43:06 ago on Mon 01 Aug 2022 08:08:50 AM EDT.
Obsoleting Packages
centos-stream-release.noarch    8.6-1.el8      @@System
centos-stream-release.noarch    8.6-1.el8      @@System
centos-stream-release.noarch    8.6-1.el8      @@System
redhat-release.x86_64           8.6-0.1.el8    @rhel-8-for-x86_64-baseos-rpms
"""
This makes it appear that the hosts always have an upgrade pending in the
oVirt Engine UI.
"centos-strea-release.noarch" appears to contain similar files to the
original RHEL 8.6 package "redhat-release-8.6.0.1.el8" except for the
addition of "centos*" named files.
Any thoughts on how to change this behavior so that the hosts don't appear
to have packages to upgrade? A 'dnf update' does nothing to these
"Obsoleting Packages".
Thanks in advance.
2 years, 8 months
Dark Mode
by Dean L
Is there a way to place oVirt in Dark Mode? It's one of those things I really like about moVirt. :)
Thanks!
2 years, 8 months
Gluster volume "deleted" by accident --- Is it possible to recover?
by itforums51@gmail.com
hi everyone,
I have a 3-node oVirt 4.4.6 cluster in a hyperconverged (HC) setup.
Today I intended to extend the data and vmstore volumes by adding another brick to each; then by accident I pressed the "cleanup" button. It basically looks like the volumes were deleted.
I am wondering whether there is a process for trying to recover these volumes, and therefore all VMs (including the Hosted Engine).
```
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
gluster_lv_data gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
gluster_lv_data-brick1 gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4 0.45
gluster_lv_engine gluster_vg_sda4 -wi-a----- 100.00g
gluster_lv_vmstore gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
gluster_lv_vmstore-brick1 gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4 0.33
gluster_thinpool_gluster_vg_sda4 gluster_vg_sda4 twi-aot--- <7.07t 11.46 0.89
```
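A sketch of what I was thinking of trying first, assuming the brick data is still on the thin LVs shown above (LV/VG names are taken from the lvs output; the mount points are just examples):
```
# Activate the currently inactive thin LVs and mount them read-only,
# just to see whether the brick directories and VM images are still there.
lvchange -ay gluster_vg_sda4/gluster_lv_data
lvchange -ay gluster_vg_sda4/gluster_lv_vmstore
mkdir -p /mnt/check_data /mnt/check_vmstore
mount -o ro /dev/gluster_vg_sda4/gluster_lv_data /mnt/check_data
mount -o ro /dev/gluster_vg_sda4/gluster_lv_vmstore /mnt/check_vmstore
ls /mnt/check_data /mnt/check_vmstore
```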
I would appreciate any advice.
TIA
2 years, 8 months