For a few weeks now, we have not been able to connect to the vmconsole proxy:
$ ssh -t -p 2222 ovirt-vmconsole@ovirt
ovirt-vmconsole@ovirt: Permission denied (publickey).
Last successful login record: Mar 29 11:31:32
First login failure record: Mar 31 17:28:51
We tracked the issue to the following log in /var/log/ovirt-engine/engine.log:
ERROR [org.ovirt.engine.core.services.VMConsoleProxyServlet] (default task-11)  Error validating ticket: : sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Indeed, certificate /etc/pki/ovirt-engine/certs/vmconsole-proxy-helper.cer and others did expire:
# grep 'Not After' /etc/pki/ovirt-engine/certs/vmconsole-proxy-*
/etc/pki/ovirt-engine/certs/vmconsole-proxy-helper.cer: Not After : Mar 31 13:18:44 2021 GMT
/etc/pki/ovirt-engine/certs/vmconsole-proxy-host.cer: Not After : Mar 31 13:18:44 2021 GMT
/etc/pki/ovirt-engine/certs/vmconsole-proxy-user.cer: Not After : Mar 31 13:18:44 2021 GMT
But we did not manage to find out how to renew them. Any advice?
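For the renewal itself, re-running engine-setup on the engine host is the usual route: it detects expired PKI entries and offers to renew them during setup. A small sketch for checking expiry first with openssl rather than relying on the text dump embedded in the .cer files (paths are the ones from this thread):

```shell
# Print the expiry date of each certificate in a reliable, parseable form.
check_expiry() {
  for cert in "$@"; do
    printf '%s: ' "$cert"
    openssl x509 -enddate -noout -in "$cert"
  done
}
# On the engine host:
# check_expiry /etc/pki/ovirt-engine/certs/vmconsole-proxy-*.cer
# engine-setup   # offers to renew the PKI when it finds expired entries
```

If only the vmconsole certificates are expired, engine-setup should regenerate them along with the rest of the engine PKI; take a backup of /etc/pki/ovirt-engine first.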
I would like to know how disk size and snapshot allocation work, because every time I create a new snapshot the VM's disk size grows by 1 GB, and when I remove the snapshot that space is not returned to the storage domain.
I'm using oVirt 4.3.10.
How do I reprovision the VM disk?
Thank you all.
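A sketch of the mechanism, using plain qemu-img outside oVirt: each snapshot is a new copy-on-write qcow2 layer backed by the previous one. On block storage, oVirt typically carves out an initial ~1 GB logical volume for each new layer, which would match the 1 GB growth you see per snapshot; after deleting a snapshot the layers are merged, but on 4.3 the freed space may not be returned to the storage domain automatically (the image may need to be sparsified or reduced).

```shell
# Demonstrate a snapshot chain: a "snapshot" is just a new qcow2 image
# whose backing file is the previous layer.
snapshot_chain_demo() {
  local dir; dir=$(mktemp -d)
  qemu-img create -f qcow2 "$dir/base.qcow2" 1G >/dev/null
  qemu-img create -f qcow2 \
    -o backing_file="$dir/base.qcow2",backing_fmt=qcow2 \
    "$dir/snap1.qcow2" >/dev/null
  # shows both layers and their (small) actual on-disk usage
  qemu-img info --backing-chain "$dir/snap1.qcow2"
}
# snapshot_chain_demo
```

The layer files themselves start tiny and grow with writes; the fixed 1 GB you observe is the allocation oVirt reserves for the layer up front, not data actually written.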
On the second engine deployment run of a hyperconverged deployment, I get a red "Ooops!" in Cockpit.
I think it fails on some networking setup.
The first oVirt node says "Hosted Engine is up!", but the other nodes are not added to the hosted engine yet.
There is no network connectivity to the engine outside node1; I can ssh to the engine from node1 on the right IP address.
Please tell me which logs I should pull.
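For a failed hosted-engine deployment, the files usually requested are the hosted-engine setup logs and the vdsm log from the node, plus /var/log/ovirt-engine/engine.log from inside the engine VM (reachable over ssh from node1). A sketch using the standard oVirt log locations:

```shell
# Bundle the usual deployment logs from the node into one archive.
collect_he_logs() {
  local out=${1:-he-logs.tar.gz}
  tar czf "$out" --ignore-failed-read \
    /var/log/ovirt-hosted-engine-setup \
    /var/log/vdsm/vdsm.log \
    /var/log/messages 2>/dev/null || true
  echo "$out"
}
# collect_he_logs
```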
Hi ovirt gurus,
This is an interesting issue, one I never expected to have.
When I push high volumes of writes to my NAS, VMs go into
a paused state. I'm looking at this from a number of angles, including
upgrades on the NAS appliance.
I can reproduce this problem at will with a CentOS 7.9 VM on oVirt 4.5.
1. Is my analysis of the failure (below) reasonable/correct?
2. What am I looking for to validate this?
3. Is there a configuration that I can set to make it a little more robust
while I acquire the hardware to improve the NAS?
Standard test of file write speed:
[root@cen-79-pgsql-01 ~]# dd if=/dev/zero of=./test bs=512k count=4096
4096+0 records in
4096+0 records out
2147483648 bytes (2.1 GB) copied, 1.68431 s, 1.3 GB/s
Give it more data
[root@cen-79-pgsql-01 ~]# dd if=/dev/zero of=./test bs=512k count=12228
12228+0 records in
12228+0 records out
6410993664 bytes (6.4 GB) copied, 7.22078 s, 888 MB/s
The odds are about 50/50 that 6 GB will kill the VM, but 100% when I hit 8 GB.
What appears to be happening is that the intent cache on the NAS is
on an SSD, and my VMs are pushing data about three times as fast as the
SSD can handle. When the SSD queue builds beyond a certain point, the NAS
(which places reliability over speed) says "Whoah Nellie!", and the VM
gets paused.
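One note on the measurement itself: dd from /dev/zero without a sync flag largely times the page cache, so the burst the NAS actually has to absorb may be even larger than the quoted rates suggest. A hedged sketch of a flush-inclusive run (the target path and sizes are examples):

```shell
# conv=fdatasync includes the final flush in the timing, so the reported
# rate reflects what actually reached storage rather than the page cache.
write_bench() {
  local target=$1 mb=$2
  dd if=/dev/zero of="$target" bs=1M count="$mb" conv=fdatasync 2>&1 | tail -n 1
}
# write_bench ./test 6144   # ~6 GB, the size that triggers the pauses
```

On the oVirt side, the pause itself is VDSM reacting to I/O errors or timeouts reported by the storage layer, so the durable fix is indeed on the NAS; the guest resumes (or can be resumed) once storage responds again.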
We are currently running oVirt 4.3, and an upgrade/migration to 4.4 won't be possible for a few more months.
I am looking for guidelines on setting up Grafana using the Data Warehouse as a data source.
Has anyone already done this, and would you be willing to share the steps?
Kindly awaiting your reply.
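Not a full guide, but the manual 4.3 equivalent of what 4.4 later shipped built-in is: create a read-only user on the DWH database (ovirt_engine_history) and point a Grafana PostgreSQL data source at it. A sketch; the role name and password are placeholders, and you will also need to allow the connection in the database's pg_hba.conf:

```shell
# Run on the engine/DWH host as root: create a read-only role on the
# ovirt_engine_history database for Grafana to query (names are examples).
create_grafana_reader() {
  su - postgres -c "psql -d ovirt_engine_history" <<'SQL'
CREATE USER grafana_ro WITH PASSWORD 'changeme';
GRANT CONNECT ON DATABASE ovirt_engine_history TO grafana_ro;
GRANT USAGE ON SCHEMA public TO grafana_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana_ro;
SQL
}
# create_grafana_reader
```

Then in Grafana add a PostgreSQL data source: host = the engine/DWH host on port 5432, database = ovirt_engine_history, user = grafana_ro.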
I have built a system as a template on oVirt. Specifically, Ubuntu 18.04 server.
I am noticing an issue when creating new VMs from that template. I used the "seal template" checkbox when creating the template.
When I create a new Ubuntu VM, I get duplicate IP addresses on all the machines created from the template.
It seems the checkbox doesn't fully function as intended; I would need further manual steps to clear up this issue.
Has anyone else noticed this behavior? Is this expected or have I missed something?
Thanks for your input!
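For what it's worth, the usual cause of duplicate IPs from a template on Ubuntu 18.04 is a shared /etc/machine-id: netplan/systemd-networkd derive the DHCP client identifier from it by default, so clones with the same machine-id get the same lease. If sealing didn't clear it, a manual sketch to run inside the template before sealing (the function takes an optional filesystem root so it can be exercised offline):

```shell
# Clear the machine-id so each clone regenerates its own on first boot,
# giving every VM a distinct DHCP client identifier.
seal_machine_id() {
  local root="${1:-}"   # "" = live system, or the root of a mounted image
  truncate -s 0 "${root}/etc/machine-id"
  rm -f "${root}/var/lib/dbus/machine-id"
  ln -s /etc/machine-id "${root}/var/lib/dbus/machine-id"
}
# seal_machine_id   # run on the template itself before sealing
```

An empty (not missing) /etc/machine-id is what tells systemd to generate a fresh one on first boot.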
I'm restoring a full oVirt engine backup for oVirt 4.3, made with the
--scope=all option.
I restored the backup on a fresh CentOS7 machine. The process went well,
but when trying to log into the restored authentication system I get the
following message which won't allow me to log in:
The provided authorization grant for the auth code has expired.
What does that mean and how can it be fixed?
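One hedged guess worth ruling out first: that SSO error commonly appears when the clock on the freshly installed engine host has drifted, since the authorization code issued during login is only valid for a short window. A sketch for measuring the skew (the host name is an example):

```shell
# Compare the restored engine's clock against the local (assumed NTP-synced)
# clock; even modest drift can invalidate the short-lived SSO auth code.
clock_skew() {
  local local_ts remote_ts
  local_ts=$(date -u +%s)
  remote_ts=$(ssh "$1" date -u +%s)
  echo $(( remote_ts - local_ts ))
}
# clock_skew engine.example.com   # seconds of drift; near 0 is healthy
```

If the skew is large, enabling chronyd/ntpd on the new CentOS 7 machine and retrying the login is a cheap first test.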
The issue was with the edk2-ovmf package update; after rolling that package
back, the CPU was recognized and the host came up. Tested on one host so far.
Thanks & Regards,
From: Nur Imam Febrianto <nur_imam(a)outlook.com>
Sent: Wednesday, May 26, 2021 9:54 PM
To: k.gunasekhar(a)non.keysight.com; users(a)ovirt.org
Subject: RE: [ovirt-users] Re: CPU Compatibility Problem after Upgrading
Centos 8 Stream Host
I've already tried several things; it seems the kernel update doesn't cause this
problem. I already tried yum update excluding the kernel, and the issue still persists.
Nur Imam Febrianto
From: k.gunasekhar--- via Users <mailto:firstname.lastname@example.org>
Sent: 26 May 2021 12:27
To: users(a)ovirt.org <mailto:email@example.com>
Subject: [ovirt-users] Re: CPU Compatibility Problem after Upgrading Centos
8 Stream Host
I also ran into the same problem today. How did you do the rollback with yum? I see many
updates in the yum history.
Here is what the error says.
The host CPU does not match the Cluster CPU Type and is running in a
degraded mode. It is missing the following CPU flags: model_IvyBridge,
spec_ctrl. Please update the host CPU microcode or change the Cluster CPU Type.
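A sketch of the rollback with yum, assuming CentOS 8 Stream (the package name comes from later in this thread, where edk2-ovmf turned out to be the culprit; transaction IDs differ per host):

```shell
# Identify and roll back the suspect package; "yum history undo <ID>" is
# the alternative if several packages changed in one transaction.
rollback_pkg() {
  local pkg=$1
  yum history list "$pkg"    # find the transaction that updated it
  yum downgrade -y "$pkg"    # revert to the previously installed version
}
# rollback_pkg edk2-ovmf
```

After downgrading, restarting the affected VMs (or re-activating the host) should make the CPU-type check pass again if this is the same issue.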
We need to add and remove directly mapped LUNs on multiple VMs in our
non-production environment. The environment is backed by an iSCSI SAN. In
testing, removing a directly mapped LUN does not remove the
underlying multipath map and SCSI devices on the hosts. Several questions:
1) Is this the expected behavior?
2) Are we supposed to go to each KVM host and manually remove the
underlying multipath devices?
3) Is there a technical reason that oVirt doesn't do this as part of the
steps to removing the storage?
This is something that was handled by the manager in the previous
virtualization platform we used, Oracle's Xen-based Oracle VM.
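On questions 1 and 2: yes, this matches oVirt's known behavior; detaching a direct LUN in the engine does not clean up the SCSI paths and multipath map on each host, so that part is manual. A hedged sketch of the per-host cleanup (the WWID is an example; make sure nothing is using the LUN first):

```shell
# Flush the multipath map for a removed LUN and delete its underlying
# SCSI paths. Run on each KVM host after detaching the LUN in the engine.
flush_removed_lun() {
  local wwid=$1 dm dev slaves
  # resolve the dm-N node behind /dev/mapper/<wwid> to find its sdX slaves
  dm=$(basename "$(readlink -f "/dev/mapper/$wwid")")
  slaves=$(ls "/sys/block/$dm/slaves" 2>/dev/null)
  multipath -f "$wwid"                          # remove the multipath map
  for dev in $slaves; do
    echo 1 > "/sys/block/$dev/device/delete"    # drop each SCSI path
  done
}
# flush_removed_lun 36000d31000abcd000000000000000001   # example WWID
```

The slave paths are captured before flushing the map, since they are no longer discoverable through device-mapper afterwards.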