Reinstall standalone node without VM loss
by douglasddr8@gmail.com
My server failed and I can't boot via UEFI
How can I reinstall this node (standalone) without losing my virtual machines? I checked the filesystem and it's completely intact, but I couldn't figure out what caused the UEFI failure.
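If the filesystem really is intact, the failure may just be a lost or corrupted UEFI boot entry, which can sometimes be repaired from a rescue/live environment instead of reinstalling. A rough sketch (the disk, partition number, and loader path are placeholders for a typical EL8-based node and must be adapted to your layout):

```shell
# From a rescue/live environment booted in UEFI mode:
# list the current firmware boot entries
efibootmgr -v

# If the node's entry is missing, recreate it.
# /dev/sda, partition 1, and the loader path are placeholders --
# point them at your actual EFI system partition and shim binary.
efibootmgr -c -d /dev/sda -p 1 -L "oVirt Node" -l '\EFI\centos\shimx64.efi'
```

If a full reinstall is unavoidable anyway, leaving the disks holding the VM images untouched and pointing the fresh install at the existing storage should preserve the VMs, but take a backup before touching anything.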
1 year, 10 months
can't use vmconsole anymore
by Nathanaël Blanchet
Hi,
I used to use the vmconsole proxy, but for a while now I've been getting
this issue (currently on 4.4.5):
# ssh -t -p 2222 ovirt-vmconsole(a)air.v100.abes.fr connect
ovirt-vmconsole(a)air.v100.abes.fr: Permission denied (publickey).
I found following in the engine.log
2021-04-15 17:55:43,094+02 ERROR [org.ovirt.engine.core.services.VMConsoleProxyServlet] (default task-4) [] Error validating ticket: : sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
at java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
at java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297)
at org.ovirt.engine.core.uutils//org.ovirt.engine.core.uutils.crypto.CertificateChain.buildCertPath(CertificateChain.java:128)
at org.ovirt.engine.core.uutils//org.ovirt.engine.core.uutils.crypto.ticket.TicketDecoder.decode(TicketDecoder.java:89)
at deployment.engine.ear.services.war//org.ovirt.engine.core.services.VMConsoleProxyServlet.validateTicket(VMConsoleProxyServlet.java:175)
at deployment.engine.ear.services.war//org.ovirt.engine.core.services.VMConsoleProxyServlet.doPost(VMConsoleProxyServlet.java:225)
The user key is the right one; I use the same key with my other engines and
can successfully connect to VM consoles.
Thank you for helping
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
1 year, 10 months
Host fails to activate
by David Johnson
Good afternoon all,
Ovirt version: 4.14.4.10.7-1.el8
Centos version: Linux version 4.18.0-365.el8.x86_64 (
mockbuild(a)kbuilder.bsys.centos.org) (gcc version 8.5.0 20210514 (Red Hat
8.5.0-10) (GCC)) #1 SMP Thu Feb 10 16:11:23 UTC 2022
Background:
We had a mother board fail in our storage device. I was able to migrate the
storage domain to the backup device before it failed completely, and have
been running on the backup device for several weeks while we purchased a
replacement main storage.
Today I shut everything down cleanly, replaced the main storage, and
restarted the cluster. We did disconnect and reconnect the network on all
of the devices as we shuffled equipment in the rack.
One of the hosts in the cluster refuses to come back up. I am able to
connect to the host via PuTTY.
The oVirt GUI reports:
Setting Host ovirt-host-03.maxisinc.net to Non-Operational mode.
Completed: Jun 11, 2022, 4:59:57 PM
Activating Host ovirt-host-03.maxisinc.net
Completed: Jun 11, 2022, 4:59:57 PM
Invoking Activate Host ovirt-host-03.maxisinc.net
Completed: Jun 11, 2022, 4:57:40 PM
Installing Host ovirt-host-03.maxisinc.net
log from host is
5:09 PM
GetManagedObjects() failed: org.freedesktop.DBus.Error.NoReply: Did not
receive a reply. Possible causes include: the remote application did not
send a reply, the message bus security policy blocked the reply, the reply
timeout expired, or the network connection was broken.
pulseaudio
4:55 PM
bondscan-DGwC1l: option lacp_active: mode dependency failed, not supported
in mode balance-alb(6)
kernel
4:55 PM
bondscan-DGwC1l: option arp_all_targets: invalid value (2)
kernel
4:55 PM
bondscan-DGwC1l: option fail_over_mac: invalid value (3)
kernel
4:55 PM
bondscan-DGwC1l: option primary_reselect: invalid value (3)
kernel
4:55 PM
bondscan-DGwC1l: option ad_select: invalid value (3)
kernel
4:55 PM
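For what it's worth, the bondscan-* lines are VDSM probing the kernel's bonding options at startup and are normally harmless noise; the DBus NoReply is the more interesting entry. A first-pass check on the host (service names as on a standard oVirt 4.4 EL8 host; ovirt-ha-agent only applies if this is a hosted-engine host):

```shell
# Check the services the engine needs in order to activate the host
systemctl status vdsmd supervdsmd --no-pager

# Recent VDSM errors usually explain a Non-Operational state
journalctl -u vdsmd --since "1 hour ago" | tail -50

# Verify the management bridge and bonds came back after the recabling
ip -br link show
```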
1 year, 10 months
oVirt SSH rate limit and disable SSH passwd auth
by tasnadi.peter@kifu.gov.hu
Hello,
1.
Is it possible to disable SSH root password authentication in a working oVirt cluster (hosts and ovirt-engine) without any problems?
/etc/ssh/sshd_config
PasswordAuthentication no
The SSH public authentication key is set on the host.
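For item 1, a cautious sketch: since the engine reaches hosts over key-based SSH, disabling password authentication should be safe once the engine's public key is confirmed present on every host, and the change can be validated before it takes effect:

```shell
# Confirm at least one authorized key is present before locking out passwords
grep -c -E '^(ssh-rsa|ssh-ed25519|ecdsa-)' /root/.ssh/authorized_keys

# Back up, then disable password authentication
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' \
    /etc/ssh/sshd_config

# Validate the config first, and only reload (not restart) so an
# existing session survives a mistake
sshd -t && systemctl reload sshd
```

Keep the current SSH session open and test a fresh login before closing it, in case the key setup is incomplete on some host.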
2.
I tried setting an SSH rate limit using firewall-cmd, but it doesn't work for some reason; I can still log in more often than the limit should allow.
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" priority="-1" service name=ssh limit value=3/m accept'
Is there a best practice for this?
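For item 2, one likely explanation: a rich rule's limit only throttles how often that particular accept rule fires. If the plain ssh service is still enabled in the zone, connections above the limit simply match the service entry and are accepted anyway. A sketch of making the limit effective (zone assumed to be public; adjust to yours):

```shell
# Remove the unconditional ssh service so only the limited rule matches
firewall-cmd --permanent --zone=public --remove-service=ssh

# Accept at most 3 new SSH connections per minute; anything above the
# limit falls through to the zone's default handling
firewall-cmd --permanent --zone=public \
    --add-rich-rule='rule family="ipv4" service name="ssh" limit value="3/m" accept'
firewall-cmd --reload
```

One caveat: oVirt manages firewalld on its hosts, so hand-made rules may be overwritten when a host is redeployed; check whether your version lets you declare custom rules through the engine configuration instead.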
Thanks
Peter
1 year, 10 months
[ANN] oVirt 4.5.1 First Release Candidate is now available for testing
by Lev Veyde
The oVirt Project is pleased to announce the availability of oVirt 4.5.1
First Release Candidate for testing, as of June 9th, 2022.
This update is the first in a series of stabilization updates to the 4.5
series.
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
- CentOS Stream 8
- RHEL 8.6 Beta and derivatives
This release supports Hypervisor Hosts on x86_64:
- oVirt Node NG (based on CentOS Stream 8)
- CentOS Stream 8
- RHEL 8.6 Beta and derivatives
Builds are also available for ppc64le and aarch64.
Experimental builds for CentOS Stream 9 are also provided for Hypervisor
Hosts.
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available based on CentOS Stream 8
- oVirt Node NG is already available based on CentOS Stream 8
Additional Resources:
* Read more about the oVirt 4.5.1 pre-release highlights:
http://www.ovirt.org/release/4.5.1/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.5.1/
[2] http://resources.ovirt.org/pub/ovirt-4.5-pre/iso/
Thanks in advance,
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
1 year, 10 months
VmMediatedDevices help for vGPU in oVirt 4.5
by Don Dupuis
Hello
I am looking for an example of how to use the new VmMediatedDevices service
to add an Nvidia vGPU to guest VMs in oVirt. I had it working just fine in
oVirt 4.4 using the custom_properties method; I just need to understand the
correct new way of doing it with the python3-ovirt-engine-sdk.
Thanks
Don
1 year, 10 months
Re: list-view instead of tiled-view in oVirt VM Portal?
by Frank Coons
Please note that (a) there are people who use more than 20 VMs and do not
need admin access, and (b) some people do not LIKE looking at big gaudy
buttons, even if there are only 15 of them.
I put in an RFE to bring back the list view YEARS ago and was basically
told that "we know what you want better than you do." I am willing to bet
that many more people want the list view than you realize, but you don't
seem to be willing to listen.
Disgruntled.
1 year, 10 months
ovirt 4.4.10 ansible version
by Kapetanakis Giannis
Could someone verify the correct ansible version for ovirt 4.4.10?
I'm having a dependency problem:
# rpm -q ansible
ansible-2.9.27-3.el8.noarch
# dnf update
Last metadata expiration check: 1:14:38 ago on Thu 09 Jun 2022 10:59:53 EEST.
Error:
Problem: package ovirt-engine-4.4.10.7-1.el8.noarch requires ansible < 2.10.0, but none of the providers can be installed
- cannot install both ansible-5.4.0-2.el8.noarch and ansible-2.9.27-3.el8.noarch
- cannot install both ansible-2.9.17-1.el8.noarch and ansible-5.4.0-2.el8.noarch
- cannot install both ansible-2.9.18-2.el8.noarch and ansible-5.4.0-2.el8.noarch
- cannot install both ansible-2.9.20-2.el8.noarch and ansible-5.4.0-2.el8.noarch
- cannot install both ansible-2.9.21-2.el8.noarch and ansible-5.4.0-2.el8.noarch
- cannot install both ansible-2.9.23-2.el8.noarch and ansible-5.4.0-2.el8.noarch
- cannot install both ansible-2.9.24-2.el8.noarch and ansible-5.4.0-2.el8.noarch
- cannot install both ansible-2.9.27-2.el8.noarch and ansible-5.4.0-2.el8.noarch
- cannot install the best update candidate for package ovirt-engine-4.4.10.7-1.el8.noarch
- cannot install the best update candidate for package ansible-2.9.27-3.el8.noarch
- package ansible-2.9.20-1.el8.noarch is filtered out by exclude filtering
- package ansible-2.9.16-1.el8.noarch is filtered out by exclude filtering
- package ansible-2.9.19-1.el8.noarch is filtered out by exclude filtering
- package ansible-2.9.23-1.el8.noarch is filtered out by exclude filtering
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
# dnf list --showduplicates ansible
Last metadata expiration check: 1:15:11 ago on Thu 09 Jun 2022 10:59:53 EEST.
Installed Packages
ansible.noarch 2.9.27-3.el8 @epel
Available Packages
ansible.noarch 2.9.17-1.el8 ovirt-4.4-centos-ovirt44
ansible.noarch 2.9.18-2.el8 ovirt-4.4-centos-ovirt44
ansible.noarch 2.9.20-2.el8 ovirt-4.4-centos-ovirt44
ansible.noarch 2.9.21-2.el8 ovirt-4.4-centos-ovirt44
ansible.noarch 2.9.23-2.el8 ovirt-4.4-centos-ovirt44
ansible.noarch 2.9.24-2.el8 ovirt-4.4-centos-ovirt44
ansible.noarch 2.9.27-2.el8 ovirt-4.4-centos-ovirt44
ansible.noarch 5.4.0-2.el8 epel
ansible.noarch 5.4.0-2.el8 ovirt-4.4-epel
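The dnf output above already answers the version question: ovirt-engine 4.4.10 requires ansible < 2.10.0, i.e. classic Ansible 2.9.x, and the conflict comes from EPEL's ansible-5.4.0 being picked as the update candidate. One way to keep it out of the transaction, sketched with the versionlock plugin (package and plugin names as on EL8):

```shell
# Install the versionlock plugin if it is not already present
dnf install python3-dnf-plugin-versionlock

# Pin ansible at the installed 2.9.x build so EPEL's 5.x is ignored
dnf versionlock add ansible

# Updates now proceed without pulling in ansible 5
dnf update
```

Alternatively, an `exclude=ansible*` line in the EPEL repo file achieves the same thing; either way, remember to remove the pin when upgrading to an oVirt version that moves off Ansible 2.9.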
thanks,
G
1 year, 10 months
Re: Self-hosted engine failing liveliness check
by McNamara, Bradley
When I run "hosted-engine --check-liveliness" it returns "Hosted Engine is not up!" When I run this command I can see it hitting the httpd server, with success, in the httpd logs. Accessing the URL directly returns this: "DB Up!Welcome to Health Status!".
________________________________
From: McNamara, Bradley <Bradley.McNamara(a)seattle.gov>
Sent: Wednesday, June 8, 2022 12:54 PM
To: users(a)ovirt.org <users(a)ovirt.org>
Subject: [ovirt-users] Self-hosted engine failing liveliness check
CAUTION: External Email
Hello, and thank you all for your help.
I'm running Oracle's rebranded oVirt 4.3.10. All has been good until I patched my self-hosted engine. I ran through the normal process: backup, global maintenance mode, update the oVirt packages, run engine-setup, etc. All completed normally without issues. I rebooted the self-hosted engine VM, and now it constantly fails liveliness checks and the HA agent reboots it every five minutes or so. I put it back in global maintenance so the HA agent would not reboot it. The VM is up and works correctly. I can do everything normally.
From what I can tell, the HA agent liveliness check is just an HTTP GET to the web portal, and I can see that happening with success. What is the liveliness check actually doing? All services on the VM are up and running without issue. Where can I look to figure this out?
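To answer the "what is it actually doing" part: in this version the check is, as far as I know, an HTTP GET against the engine's health servlet, expecting the exact "DB Up!Welcome to Health Status!" body. Reproducing it from the HA host itself narrows things down, since the agent resolves the engine FQDN on the host, not on your workstation (engine.example.org is a placeholder):

```shell
# Reproduce the liveliness probe from the HA host itself
curl -sv http://engine.example.org/ovirt-engine/services/health
# A healthy engine answers: DB Up!Welcome to Health Status!

# Check how the *host* resolves the engine FQDN -- a stale /etc/hosts
# entry or DNS mismatch after the update is a common culprit
getent hosts engine.example.org
```

If curl from the host gets the healthy body but the agent still reports "failed liveliness check", comparing the agent's log (/var/log/ovirt-hosted-engine-ha/agent.log) against the httpd access log timestamps should show where the two disagree.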
Here is the output of hosted-engine --vm-status:
[root@itdlolv101 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host itdlolv100.ci.seattle.wa.us (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : itdlolv100.ci.seattle.wa.us
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 855e161f
local_conf_timestamp : 55128
Host timestamp : 55128
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=55128 (Wed Jun 8 12:52:20 2022)
host-id=1
score=3400
vm_conf_refresh_time=55128 (Wed Jun 8 12:52:20 2022)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host itdlolv101.ci.seattle.wa.us (id: 2) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : itdlolv101.ci.seattle.wa.us
Host ID : 2
Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : cc1c2261
local_conf_timestamp : 45453
Host timestamp : 45453
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=45453 (Wed Jun 8 12:55:15 2022)
host-id=2
score=3400
vm_conf_refresh_time=45453 (Wed Jun 8 12:55:15 2022)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
[root@itdlolv101 ~]#
1 year, 10 months