How to renew vmconsole-proxy* certificates
by capelle@labri.fr
Hi,
For a few weeks now, we have not been able to connect to the vmconsole proxy:
$ ssh -t -p 2222 ovirt-vmconsole@ovirt
ovirt-vmconsole@ovirt: Permission denied (publickey).
Last successful login record: Mar 29 11:31:32
First login failure record: Mar 31 17:28:51
We tracked the issue to the following log in /var/log/ovirt-engine/engine.log:
ERROR [org.ovirt.engine.core.services.VMConsoleProxyServlet] (default task-11) [] Error validating ticket: : sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Indeed, the certificate /etc/pki/ovirt-engine/certs/vmconsole-proxy-helper.cer and others have expired:
--
# grep 'Not After' /etc/pki/ovirt-engine/certs/vmconsole-proxy-*
/etc/pki/ovirt-engine/certs/vmconsole-proxy-helper.cer: Not After : Mar 31 13:18:44 2021 GMT
/etc/pki/ovirt-engine/certs/vmconsole-proxy-host.cer: Not After : Mar 31 13:18:44 2021 GMT
/etc/pki/ovirt-engine/certs/vmconsole-proxy-user.cer: Not After : Mar 31 13:18:44 2021 GMT
--
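For reference, the expiry can also be checked with openssl directly (a minimal check, assuming the standard /etc/pki/ovirt-engine layout; the grep above only works when a text dump is embedded in the .cer file):
--
# for each vmconsole proxy certificate, print its path, subject and expiry date
for cert in /etc/pki/ovirt-engine/certs/vmconsole-proxy-*.cer; do
    echo "== ${cert}"
    openssl x509 -in "${cert}" -noout -subject -enddate
done
--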
But we have not managed to find out how to renew them. Any advice?
--
Benoît
3 years, 1 month
Snapshot and disk size allocation
by jorgevisentini@gmail.com
Hello everyone.
I would like to know how disk size and snapshot allocation work, because every time I create a new snapshot it adds 1 GB to the VM's disk size, and when I remove the snapshot that space is not returned to the storage domain.
I'm using oVirt 4.3.10.
How do I reprovision the VM disk?
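For reference, a quick way to compare the virtual and the actually allocated size of a disk volume (a sketch with an illustrative path; on a block storage domain the path would be the disk's LV device rather than a file):
# compare "virtual size" vs. "disk size" reported for a disk volume
qemu-img info /rhev/data-center/mnt/<server:_export>/<sd-uuid>/images/<img-uuid>/<vol-uuid>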
Thank you all.
3 years, 1 month
HA VM and vm leases usage with site failure
by Gianluca Cecchi
Hello,
suppose a latest 4.4.7 environment installed with an external engine and
two hosts, one in each of two sites.
For storage I have one FC storage domain.
I try to simulate a sort of "site failure scenario" to see what kind of HA
I should expect.
The 2 hosts have power mgmt configured through fence_ipmilan.
I have 2 VMs, one configured as HA with lease on storage (Resume Behavior:
kill) and one not marked as HA.
Initially host1 is SPM and it is the host that runs the two VMs.
Fencing of host1 from host2 initially works OK. I can also test it from the command line:
# fence_ipmilan -a 10.10.193.152 -P -l my_fence_user -A password -L operator -S /usr/local/bin/pwd.sh -o status
Status: ON
On host2 I then block access to host1's iDRAC:
firewall-cmd --direct --add-rule ipv4 filter OUTPUT 0 -d 10.10.193.152 -p udp --dport 623 -j DROP
firewall-cmd --direct --add-rule ipv4 filter OUTPUT 1 -j ACCEPT
so that:
# fence_ipmilan -a 10.10.193.152 -P -l my_fence_user -A password -L operator -S /usr/local/bin/pwd.sh -o status
2021-08-05 15:06:07,254 ERROR: Failed: Unable to obtain correct plug status or plug is not available
On host1 I generate panic:
# date ; echo 1 > /proc/sys/kernel/sysrq ; echo c > /proc/sysrq-trigger
Thu Aug 5 15:06:24 CEST 2021
host1 correctly completes its crash dump (kdump integration is enabled) and reboots, but I stop it at the GRUB prompt, so that from host2's point of view host1 is unreachable and its power state cannot be determined.
At this point I expected the VM lease functionality to come into play: host2 should be able to restart the HA VM, since it can see that the lease is not held by the other host and can therefore acquire the lock itself....
Instead, host2 keeps looping through power-fencing attempts; I waited about 25 minutes with no effect other than continuous retries.
After 2 minutes host2 correctly becomes SPM and the VMs are marked as unknown.
At a certain point, after the failed attempts to power fence host1, I see this event:
Failed to power fence host host1. Please check the host status and it's
power management settings, and then manually reboot it and click "Confirm
Host Has Been Rebooted"
If I select the host and choose "Confirm Host Has Been Rebooted", the two VMs are marked as down and the HA one is correctly booted by host2.
But this requires my manual intervention.
Is the behavior above the expected one, or should the use of VM leases have allowed host2 to bypass the fencing failure and start the HA VM with the lease?
Otherwise I don't see the reason to have the lease at all....
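(For reference, the lockspace and lease state as seen from a host can be inspected directly with sanlock; a minimal check, whose output names depend on the storage domain UUID:)
# on host2: list sanlock lockspaces and held resources; VM leases show up per storage domain
sanlock client status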
Thanks,
Gianluca
3 years, 2 months
Ooops! in last step of Hyperconverged deployment
by Harry O
Hi,
In the second engine deployment run of the hyperconverged deployment I get a red "Ooops!" in Cockpit.
I think it fails on some networking setup.
The first oVirt node says "Hosted Engine is up!" but the other nodes are not added to the hosted engine yet.
There is no network connectivity to the engine outside node1; I can SSH to the engine from node1 on the right IP address.
Please tell me which logs I should pull.
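(A likely starting point, assuming the standard oVirt Node paths, is the hosted-engine deployment log directory on node1:)
# hosted-engine deployment logs on the deploying node
ls -ltr /var/log/ovirt-hosted-engine-setup/
# the engine-side log lives on the engine VM, reachable from node1 over SSH:
#   /var/log/ovirt-engine/engine.log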
3 years, 3 months
Sparse VMs from Templates - Storage issues
by Shantur Rathore
Hi all,
I have a setup as detailed below
- iSCSI Storage Domain
- Template with Thin QCOW2 disk
- Multiple VMs from Template with Thin disk
oVirt Node 4.4.4
When a VM boots up it downloads some data, and that leads to an increase in volume size.
I see that every few seconds the VM gets paused with
"VM X has been paused due to no Storage space error."
and then after a few seconds
"VM X has recovered from paused back to up"
Sometimes, after many pause-and-recover cycles, the VM dies with
"VM X is down with error. Exit message: Lost connection with qemu process."
and I have to restart the VMs.
My questions:
1. How can I work around the VM dying like this?
2. Is there a way to use sparse disks without the VM being paused again and again? (see also the sketch below)
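For reference, a sketch of the kind of tuning that is sometimes suggested for this: make VDSM extend the thin LVs earlier and in bigger chunks. The [irs] option names and the vdsm.conf.d drop-in directory are assumptions to verify against your VDSM version before applying on each host:
# sketch: extend thin LVs earlier and in larger chunks (option names and semantics assumed;
# check vdsm's bundled config documentation before applying)
cat > /etc/vdsm/vdsm.conf.d/99-thin-extend.conf <<'EOF'
[irs]
volume_utilization_percent = 25
volume_utilization_chunk_mb = 2048
EOF
systemctl restart vdsmd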
Thanks in advance.
Shantur
3 years, 3 months
[ANN] oVirt 4.4.8 Async update #1
by Sandro Bonazzola
oVirt 4.4.8 Async update #1
On August 26th 2021 the oVirt project released an async update to the
following packages:
- ovirt-ansible-collection 1.6.2
- ovirt-engine 4.4.8.5
- ovirt-release44 4.4.8.1
- oVirt Node 4.4.8.1
- oVirt Appliance 4.4-20210826
Fixing the following bugs:
- Bug 1947709 <https://bugzilla.redhat.com/show_bug.cgi?id=1947709> - [IPv6] HostedEngineLocal is an isolated libvirt network, breaking upgrades from 4.3
- Bug 1966873 <https://bugzilla.redhat.com/show_bug.cgi?id=1966873> - [RFE] Create Ansible role for remove stale LUNs example remove_mpath_device.yml
- Bug 1997663 <https://bugzilla.redhat.com/show_bug.cgi?id=1997663> - Keep cinderlib dependencies optional for 4.4.8
- Bug 1996816 <https://bugzilla.redhat.com/show_bug.cgi?id=1996816> - Cluster upgrade fails with: 'OAuthException invalid_grant: The provided authorization grant for the auth code has expired.'
oVirt Node Changes:
- Consume above oVirt updates
- GlusterFS 8.6: https://docs.gluster.org/en/latest/release-notes/8.6/
- Fixes for:
  - CVE-2021-22923 <https://access.redhat.com/security/cve/CVE-2021-22923> curl: Metalink download sends credentials
  - CVE-2021-22922 <https://access.redhat.com/security/cve/CVE-2021-22922> curl: Content not matching hash in Metalink is not being discarded
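For convenience, a typical way to pull an async update like this onto a standalone engine host is sketched below; the package glob and ordering follow the usual minor-update flow, so please double-check against the upgrade guide for your setup (oVirt Node hosts are updated via the Administration Portal or an image update as usual):
# sketch of the usual engine minor-update flow
dnf update ovirt\*setup\*
engine-setup
dnf update    # then reboot if a new kernel was installed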
Full diff list:
--- ovirt-node-ng-image-4.4.8.manifest-rpm 2021-08-19 07:57:44.081590739
+0200
+++ ovirt-node-ng-image-4.4.8.1.manifest-rpm 2021-08-27 08:11:54.863736688
+0200
@@ -2,7 +2,7 @@
-ModemManager-glib-1.10.8-3.el8.x86_64
-NetworkManager-1.32.6-1.el8.x86_64
-NetworkManager-config-server-1.32.6-1.el8.noarch
-NetworkManager-libnm-1.32.6-1.el8.x86_64
-NetworkManager-ovs-1.32.6-1.el8.x86_64
-NetworkManager-team-1.32.6-1.el8.x86_64
-NetworkManager-tui-1.32.6-1.el8.x86_64
+ModemManager-glib-1.10.8-4.el8.x86_64
+NetworkManager-1.32.8-1.el8.x86_64
+NetworkManager-config-server-1.32.8-1.el8.noarch
+NetworkManager-libnm-1.32.8-1.el8.x86_64
+NetworkManager-ovs-1.32.8-1.el8.x86_64
+NetworkManager-team-1.32.8-1.el8.x86_64
+NetworkManager-tui-1.32.8-1.el8.x86_64
@@ -94 +94 @@
-curl-7.61.1-18.el8.x86_64
+curl-7.61.1-18.el8_4.1.x86_64
@@ -106,4 +106,4 @@
-device-mapper-1.02.177-5.el8.x86_64
-device-mapper-event-1.02.177-5.el8.x86_64
-device-mapper-event-libs-1.02.177-5.el8.x86_64
-device-mapper-libs-1.02.177-5.el8.x86_64
+device-mapper-1.02.177-6.el8.x86_64
+device-mapper-event-1.02.177-6.el8.x86_64
+device-mapper-event-libs-1.02.177-6.el8.x86_64
+device-mapper-libs-1.02.177-6.el8.x86_64
@@ -140,36 +140,36 @@
-fence-agents-all-4.2.1-74.el8.x86_64
-fence-agents-amt-ws-4.2.1-74.el8.noarch
-fence-agents-apc-4.2.1-74.el8.noarch
-fence-agents-apc-snmp-4.2.1-74.el8.noarch
-fence-agents-bladecenter-4.2.1-74.el8.noarch
-fence-agents-brocade-4.2.1-74.el8.noarch
-fence-agents-cisco-mds-4.2.1-74.el8.noarch
-fence-agents-cisco-ucs-4.2.1-74.el8.noarch
-fence-agents-common-4.2.1-74.el8.noarch
-fence-agents-compute-4.2.1-74.el8.noarch
-fence-agents-drac5-4.2.1-74.el8.noarch
-fence-agents-eaton-snmp-4.2.1-74.el8.noarch
-fence-agents-emerson-4.2.1-74.el8.noarch
-fence-agents-eps-4.2.1-74.el8.noarch
-fence-agents-heuristics-ping-4.2.1-74.el8.noarch
-fence-agents-hpblade-4.2.1-74.el8.noarch
-fence-agents-ibmblade-4.2.1-74.el8.noarch
-fence-agents-ifmib-4.2.1-74.el8.noarch
-fence-agents-ilo-moonshot-4.2.1-74.el8.noarch
-fence-agents-ilo-mp-4.2.1-74.el8.noarch
-fence-agents-ilo-ssh-4.2.1-74.el8.noarch
-fence-agents-ilo2-4.2.1-74.el8.noarch
-fence-agents-intelmodular-4.2.1-74.el8.noarch
-fence-agents-ipdu-4.2.1-74.el8.noarch
-fence-agents-ipmilan-4.2.1-74.el8.noarch
-fence-agents-kdump-4.2.1-74.el8.x86_64
-fence-agents-mpath-4.2.1-74.el8.noarch
-fence-agents-redfish-4.2.1-74.el8.x86_64
-fence-agents-rhevm-4.2.1-74.el8.noarch
-fence-agents-rsa-4.2.1-74.el8.noarch
-fence-agents-rsb-4.2.1-74.el8.noarch
-fence-agents-sbd-4.2.1-74.el8.noarch
-fence-agents-scsi-4.2.1-74.el8.noarch
-fence-agents-vmware-rest-4.2.1-74.el8.noarch
-fence-agents-vmware-soap-4.2.1-74.el8.noarch
-fence-agents-wti-4.2.1-74.el8.noarch
+fence-agents-all-4.2.1-75.el8.x86_64
+fence-agents-amt-ws-4.2.1-75.el8.noarch
+fence-agents-apc-4.2.1-75.el8.noarch
+fence-agents-apc-snmp-4.2.1-75.el8.noarch
+fence-agents-bladecenter-4.2.1-75.el8.noarch
+fence-agents-brocade-4.2.1-75.el8.noarch
+fence-agents-cisco-mds-4.2.1-75.el8.noarch
+fence-agents-cisco-ucs-4.2.1-75.el8.noarch
+fence-agents-common-4.2.1-75.el8.noarch
+fence-agents-compute-4.2.1-75.el8.noarch
+fence-agents-drac5-4.2.1-75.el8.noarch
+fence-agents-eaton-snmp-4.2.1-75.el8.noarch
+fence-agents-emerson-4.2.1-75.el8.noarch
+fence-agents-eps-4.2.1-75.el8.noarch
+fence-agents-heuristics-ping-4.2.1-75.el8.noarch
+fence-agents-hpblade-4.2.1-75.el8.noarch
+fence-agents-ibmblade-4.2.1-75.el8.noarch
+fence-agents-ifmib-4.2.1-75.el8.noarch
+fence-agents-ilo-moonshot-4.2.1-75.el8.noarch
+fence-agents-ilo-mp-4.2.1-75.el8.noarch
+fence-agents-ilo-ssh-4.2.1-75.el8.noarch
+fence-agents-ilo2-4.2.1-75.el8.noarch
+fence-agents-intelmodular-4.2.1-75.el8.noarch
+fence-agents-ipdu-4.2.1-75.el8.noarch
+fence-agents-ipmilan-4.2.1-75.el8.noarch
+fence-agents-kdump-4.2.1-75.el8.x86_64
+fence-agents-mpath-4.2.1-75.el8.noarch
+fence-agents-redfish-4.2.1-75.el8.x86_64
+fence-agents-rhevm-4.2.1-75.el8.noarch
+fence-agents-rsa-4.2.1-75.el8.noarch
+fence-agents-rsb-4.2.1-75.el8.noarch
+fence-agents-sbd-4.2.1-75.el8.noarch
+fence-agents-scsi-4.2.1-75.el8.noarch
+fence-agents-vmware-rest-4.2.1-75.el8.noarch
+fence-agents-vmware-soap-4.2.1-75.el8.noarch
+fence-agents-wti-4.2.1-75.el8.noarch
@@ -215,7 +215,7 @@
-glusterfs-8.5-2.el8.x86_64
-glusterfs-cli-8.5-2.el8.x86_64
-glusterfs-client-xlators-8.5-2.el8.x86_64
-glusterfs-events-8.5-2.el8.x86_64
-glusterfs-fuse-8.5-2.el8.x86_64
-glusterfs-geo-replication-8.5-2.el8.x86_64
-glusterfs-server-8.5-2.el8.x86_64
+glusterfs-8.6-1.el8.x86_64
+glusterfs-cli-8.6-1.el8.x86_64
+glusterfs-client-xlators-8.6-1.el8.x86_64
+glusterfs-events-8.6-1.el8.x86_64
+glusterfs-fuse-8.6-1.el8.x86_64
+glusterfs-geo-replication-8.6-1.el8.x86_64
+glusterfs-server-8.6-1.el8.x86_64
@@ -301,5 +301,5 @@
-kernel-4.18.0-326.el8.x86_64
-kernel-core-4.18.0-326.el8.x86_64
-kernel-modules-4.18.0-326.el8.x86_64
-kernel-tools-4.18.0-326.el8.x86_64
-kernel-tools-libs-4.18.0-326.el8.x86_64
+kernel-4.18.0-331.el8.x86_64
+kernel-core-4.18.0-331.el8.x86_64
+kernel-modules-4.18.0-331.el8.x86_64
+kernel-tools-4.18.0-331.el8.x86_64
+kernel-tools-libs-4.18.0-331.el8.x86_64
@@ -310 +310 @@
-kmod-kvdo-6.2.5.65-79.el8.x86_64
+kmod-kvdo-6.2.5.72-79.el8.x86_64
@@ -363 +363 @@
-libcurl-7.61.1-18.el8.x86_64
+libcurl-7.61.1-18.el8_4.1.x86_64
@@ -381,6 +381,6 @@
-libgfapi0-8.5-2.el8.x86_64
-libgfchangelog0-8.5-2.el8.x86_64
-libgfrpc0-8.5-2.el8.x86_64
-libgfxdr0-8.5-2.el8.x86_64
-libglusterd0-8.5-2.el8.x86_64
-libglusterfs0-8.5-2.el8.x86_64
+libgfapi0-8.6-1.el8.x86_64
+libgfchangelog0-8.6-1.el8.x86_64
+libgfrpc0-8.6-1.el8.x86_64
+libgfxdr0-8.6-1.el8.x86_64
+libglusterd0-8.6-1.el8.x86_64
+libglusterfs0-8.6-1.el8.x86_64
@@ -416 +415,0 @@
-libmetalink-0.1.3-7.el8.x86_64
@@ -558,2 +557,2 @@
-lvm2-2.03.12-5.el8.x86_64
-lvm2-libs-2.03.12-5.el8.x86_64
+lvm2-2.03.12-6.el8.x86_64
+lvm2-libs-2.03.12-6.el8.x86_64
@@ -641 +640 @@
-ovirt-ansible-collection-1.6.0-1.el8.noarch
+ovirt-ansible-collection-1.6.2-1.el8.noarch
@@ -649 +648 @@
-ovirt-node-ng-image-update-placeholder-4.4.8-1.el8.noarch
+ovirt-node-ng-image-update-placeholder-4.4.8.1-1.el8.noarch
@@ -657,2 +656,2 @@
-ovirt-release-host-node-4.4.8-1.el8.noarch
-ovirt-release44-4.4.8-1.el8.noarch
+ovirt-release-host-node-4.4.8.1-1.el8.noarch
+ovirt-release44-4.4.8.1-1.el8.noarch
@@ -665,3 +664,3 @@
-pacemaker-cluster-libs-2.1.0-5.el8.x86_64
-pacemaker-libs-2.1.0-5.el8.x86_64
-pacemaker-schemas-2.1.0-5.el8.noarch
+pacemaker-cluster-libs-2.1.0-6.el8.x86_64
+pacemaker-libs-2.1.0-6.el8.x86_64
+pacemaker-schemas-2.1.0-6.el8.noarch
@@ -773 +772 @@
-python3-gluster-8.5-2.el8.x86_64
+python3-gluster-8.6-1.el8.x86_64
@@ -835 +834 @@
-python3-perf-4.18.0-326.el8.x86_64
+python3-perf-4.18.0-331.el8.x86_64
@@ -935,2 +934,2 @@
-selinux-policy-3.14.3-75.el8.noarch
-selinux-policy-targeted-3.14.3-75.el8.noarch
+selinux-policy-3.14.3-76.el8.noarch
+selinux-policy-targeted-3.14.3-76.el8.noarch
@@ -941 +940 @@
-shadow-utils-4.6-13.el8.x86_64
+shadow-utils-4.6-14.el8.x86_64
@@ -948 +947 @@
-sos-4.1-4.el8.noarch
+sos-4.1-5.el8.noarch
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
3 years, 3 months
oVirt 4.3 DWH with Grafana
by Vrgotic, Marko
Dear oVirt,
We are currently running oVirt 4.3, and upgrade/migration to 4.4 won't be possible for a few more months.
I am looking for guidelines or a how-to for setting up Grafana with the Data Warehouse as a data source.
Has anyone already done this and would be willing to share the steps?
Kindly awaiting your reply.
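For reference, one building block would be a read-only PostgreSQL user that Grafana can use as a data source against the DWH database (a sketch; it assumes the default database name ovirt_engine_history and a local PostgreSQL on the engine host; on 4.3 the engine uses an SCL PostgreSQL, so psql may need to be run via scl enable):
# sketch: create a read-only user for Grafana on the DWH database
sudo -u postgres psql -d ovirt_engine_history <<'EOF'
CREATE USER grafana_ro WITH PASSWORD 'changeme';
GRANT CONNECT ON DATABASE ovirt_engine_history TO grafana_ro;
GRANT USAGE ON SCHEMA public TO grafana_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana_ro;
EOF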
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e: m.vrgotic(a)activevideo.com
w: www.activevideo.com
3 years, 3 months
oVirt Monitoring Alerts via Grafana
by Aviv Litman
Hi all,
Hope you all are doing well.
Check out this new oVirt blog: oVirt Monitoring Alerts via Grafana
<https://blogs.ovirt.org/2021/08/ovirt-monitoring-alerts-via-grafana/>.
The blog explains how to configure alerts in Grafana for your oVirt
environment and provides an example alerts dashboard that you can import,
use and edit to your needs.
With alerts, significant or critical data changes can be recognized immediately, so don't miss this opportunity to learn how to configure and use this important tool.
Feedback, comments and suggestions are more than welcome!
--
Aviv Litman
BI Associate Software Engineer
Red Hat <https://www.redhat.com/>
alitman(a)redhat.com
<https://www.redhat.com/>
3 years, 3 months