Ooops! in last step of Hyperconverged deployment
by Harry O
Hi,
During the second engine deployment run of the Hyperconverged deployment I get a red "Ooops!" in Cockpit.
I think it fails during some networking setup.
The first oVirt Node says "Hosted Engine is up!" but the other nodes have not been added to the Hosted Engine yet.
There is no network connectivity to the Engine from outside node1; I can SSH to the engine from node1 on the correct IP address.
Please tell me which logs I should pull.
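The logs that usually matter at this stage live in a handful of well-known places; a minimal sketch of what to collect, assuming a default oVirt Node install (standard paths, nothing environment-specific):
  # On node1 (the node running the deployment):
  ls /var/log/ovirt-hosted-engine-setup/        # ovirt-hosted-engine-setup-*.log and the ansible logs
  less /var/log/vdsm/vdsm.log                   # storage and VM operations done by VDSM
  less /var/log/vdsm/supervdsm.log              # privileged operations, including ovirtmgmt bridge setup
  journalctl -u NetworkManager -u cockpit       # host networking and the Cockpit session itself
  # Inside the engine VM (reachable over SSH from node1):
  less /var/log/ovirt-engine/engine.log
  less /var/log/ovirt-engine/host-deploy/*.log  # per-host add/deploy logs for the other nodes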
2 years, 6 months
Sparse VMs from Templates - Storage issues
by Shantur Rathore
Hi all,
I have a setup as detailed below
- iSCSI Storage Domain
- Template with Thin QCOW2 disk
- Multiple VMs from Template with Thin disk
oVirt Node 4.4.4
When a VM boots up it downloads some data, and that leads to an
increase in volume size.
I see that every few seconds the VM gets paused with
"VM X has been paused due to no Storage space error."
and then, after a few seconds,
"VM X has recovered from paused back to up"
Sometimes, after many pause-and-recover cycles, the VM dies with
"VM X is down with error. Exit message: Lost connection with qemu process."
and I have to restart the VMs.
My questions:
1. How can I work around the VM dying?
2. Is there a way to use sparse disks without the VM being paused again and
again?
Thanks in advance.
Shantur
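For background on the pauses: on block storage (iSCSI/FC) a thin disk is an LV that VDSM extends in chunks while the guest writes; if the guest outruns the extension, the VM is paused until the LV grows. The extension behaviour can be tuned in /etc/vdsm/vdsm.conf. A rough sketch of earlier/larger extensions, assuming the [irs] option names and defaults I remember from recent VDSM releases (the values below are illustrative, not recommendations):
  # /etc/vdsm/vdsm.conf on each host, then restart vdsmd
  [irs]
  # start extending when the volume is this full (default is 50 percent)
  volume_utilization_percent = 25
  # grow the volume by this many MB on each extension (default is 1024)
  volume_utilization_chunk_mb = 4096

  systemctl restart vdsmd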
2 years, 6 months
[ANN] oVirt 4.4.8 Async update #1
by Sandro Bonazzola
oVirt 4.4.8 Async update #1
On August 26th 2021 the oVirt project released an async update to the
following packages:
- ovirt-ansible-collection 1.6.2
- ovirt-engine 4.4.8.5
- ovirt-release44 4.4.8.1
- oVirt Node 4.4.8.1
- oVirt Appliance 4.4-20210826
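The announcement does not repeat the update procedure; the usual way to consume an async update like this, sketched from the standard oVirt minor-upgrade flow (adapt to your setup, and for a hosted engine put the environment in global maintenance first):
  # On the engine VM:
  dnf update ovirt\*setup\*
  engine-setup
  dnf update                              # remaining OS/engine packages
  # On each oVirt Node host (or upgrade the host from the Administration Portal):
  dnf update ovirt-node-ng-image-update
  reboot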
Fixing the following bugs:
- Bug 1947709 <https://bugzilla.redhat.com/show_bug.cgi?id=1947709> - [IPv6] HostedEngineLocal is an isolated libvirt network, breaking upgrades from 4.3
- Bug 1966873 <https://bugzilla.redhat.com/show_bug.cgi?id=1966873> - [RFE] Create Ansible role for remove stale LUNs example remove_mpath_device.yml
- Bug 1997663 <https://bugzilla.redhat.com/show_bug.cgi?id=1997663> - Keep cinderlib dependencies optional for 4.4.8
- Bug 1996816 <https://bugzilla.redhat.com/show_bug.cgi?id=1996816> - Cluster upgrade fails with: 'OAuthException invalid_grant: The provided authorization grant for the auth code has expired.'
oVirt Node Changes:
- Consume above oVirt updates
- GlusterFS 8.6: https://docs.gluster.org/en/latest/release-notes/8.6/
- Fixes for:
  - CVE-2021-22923 <https://access.redhat.com/security/cve/CVE-2021-22923> curl: Metalink download sends credentials
  - CVE-2021-22922 <https://access.redhat.com/security/cve/CVE-2021-22922> curl: Content not matching hash in Metalink is not being discarded
Full diff list:
--- ovirt-node-ng-image-4.4.8.manifest-rpm 2021-08-19 07:57:44.081590739 +0200
+++ ovirt-node-ng-image-4.4.8.1.manifest-rpm 2021-08-27 08:11:54.863736688 +0200
@@ -2,7 +2,7 @@
-ModemManager-glib-1.10.8-3.el8.x86_64
-NetworkManager-1.32.6-1.el8.x86_64
-NetworkManager-config-server-1.32.6-1.el8.noarch
-NetworkManager-libnm-1.32.6-1.el8.x86_64
-NetworkManager-ovs-1.32.6-1.el8.x86_64
-NetworkManager-team-1.32.6-1.el8.x86_64
-NetworkManager-tui-1.32.6-1.el8.x86_64
+ModemManager-glib-1.10.8-4.el8.x86_64
+NetworkManager-1.32.8-1.el8.x86_64
+NetworkManager-config-server-1.32.8-1.el8.noarch
+NetworkManager-libnm-1.32.8-1.el8.x86_64
+NetworkManager-ovs-1.32.8-1.el8.x86_64
+NetworkManager-team-1.32.8-1.el8.x86_64
+NetworkManager-tui-1.32.8-1.el8.x86_64
@@ -94 +94 @@
-curl-7.61.1-18.el8.x86_64
+curl-7.61.1-18.el8_4.1.x86_64
@@ -106,4 +106,4 @@
-device-mapper-1.02.177-5.el8.x86_64
-device-mapper-event-1.02.177-5.el8.x86_64
-device-mapper-event-libs-1.02.177-5.el8.x86_64
-device-mapper-libs-1.02.177-5.el8.x86_64
+device-mapper-1.02.177-6.el8.x86_64
+device-mapper-event-1.02.177-6.el8.x86_64
+device-mapper-event-libs-1.02.177-6.el8.x86_64
+device-mapper-libs-1.02.177-6.el8.x86_64
@@ -140,36 +140,36 @@
-fence-agents-all-4.2.1-74.el8.x86_64
-fence-agents-amt-ws-4.2.1-74.el8.noarch
-fence-agents-apc-4.2.1-74.el8.noarch
-fence-agents-apc-snmp-4.2.1-74.el8.noarch
-fence-agents-bladecenter-4.2.1-74.el8.noarch
-fence-agents-brocade-4.2.1-74.el8.noarch
-fence-agents-cisco-mds-4.2.1-74.el8.noarch
-fence-agents-cisco-ucs-4.2.1-74.el8.noarch
-fence-agents-common-4.2.1-74.el8.noarch
-fence-agents-compute-4.2.1-74.el8.noarch
-fence-agents-drac5-4.2.1-74.el8.noarch
-fence-agents-eaton-snmp-4.2.1-74.el8.noarch
-fence-agents-emerson-4.2.1-74.el8.noarch
-fence-agents-eps-4.2.1-74.el8.noarch
-fence-agents-heuristics-ping-4.2.1-74.el8.noarch
-fence-agents-hpblade-4.2.1-74.el8.noarch
-fence-agents-ibmblade-4.2.1-74.el8.noarch
-fence-agents-ifmib-4.2.1-74.el8.noarch
-fence-agents-ilo-moonshot-4.2.1-74.el8.noarch
-fence-agents-ilo-mp-4.2.1-74.el8.noarch
-fence-agents-ilo-ssh-4.2.1-74.el8.noarch
-fence-agents-ilo2-4.2.1-74.el8.noarch
-fence-agents-intelmodular-4.2.1-74.el8.noarch
-fence-agents-ipdu-4.2.1-74.el8.noarch
-fence-agents-ipmilan-4.2.1-74.el8.noarch
-fence-agents-kdump-4.2.1-74.el8.x86_64
-fence-agents-mpath-4.2.1-74.el8.noarch
-fence-agents-redfish-4.2.1-74.el8.x86_64
-fence-agents-rhevm-4.2.1-74.el8.noarch
-fence-agents-rsa-4.2.1-74.el8.noarch
-fence-agents-rsb-4.2.1-74.el8.noarch
-fence-agents-sbd-4.2.1-74.el8.noarch
-fence-agents-scsi-4.2.1-74.el8.noarch
-fence-agents-vmware-rest-4.2.1-74.el8.noarch
-fence-agents-vmware-soap-4.2.1-74.el8.noarch
-fence-agents-wti-4.2.1-74.el8.noarch
+fence-agents-all-4.2.1-75.el8.x86_64
+fence-agents-amt-ws-4.2.1-75.el8.noarch
+fence-agents-apc-4.2.1-75.el8.noarch
+fence-agents-apc-snmp-4.2.1-75.el8.noarch
+fence-agents-bladecenter-4.2.1-75.el8.noarch
+fence-agents-brocade-4.2.1-75.el8.noarch
+fence-agents-cisco-mds-4.2.1-75.el8.noarch
+fence-agents-cisco-ucs-4.2.1-75.el8.noarch
+fence-agents-common-4.2.1-75.el8.noarch
+fence-agents-compute-4.2.1-75.el8.noarch
+fence-agents-drac5-4.2.1-75.el8.noarch
+fence-agents-eaton-snmp-4.2.1-75.el8.noarch
+fence-agents-emerson-4.2.1-75.el8.noarch
+fence-agents-eps-4.2.1-75.el8.noarch
+fence-agents-heuristics-ping-4.2.1-75.el8.noarch
+fence-agents-hpblade-4.2.1-75.el8.noarch
+fence-agents-ibmblade-4.2.1-75.el8.noarch
+fence-agents-ifmib-4.2.1-75.el8.noarch
+fence-agents-ilo-moonshot-4.2.1-75.el8.noarch
+fence-agents-ilo-mp-4.2.1-75.el8.noarch
+fence-agents-ilo-ssh-4.2.1-75.el8.noarch
+fence-agents-ilo2-4.2.1-75.el8.noarch
+fence-agents-intelmodular-4.2.1-75.el8.noarch
+fence-agents-ipdu-4.2.1-75.el8.noarch
+fence-agents-ipmilan-4.2.1-75.el8.noarch
+fence-agents-kdump-4.2.1-75.el8.x86_64
+fence-agents-mpath-4.2.1-75.el8.noarch
+fence-agents-redfish-4.2.1-75.el8.x86_64
+fence-agents-rhevm-4.2.1-75.el8.noarch
+fence-agents-rsa-4.2.1-75.el8.noarch
+fence-agents-rsb-4.2.1-75.el8.noarch
+fence-agents-sbd-4.2.1-75.el8.noarch
+fence-agents-scsi-4.2.1-75.el8.noarch
+fence-agents-vmware-rest-4.2.1-75.el8.noarch
+fence-agents-vmware-soap-4.2.1-75.el8.noarch
+fence-agents-wti-4.2.1-75.el8.noarch
@@ -215,7 +215,7 @@
-glusterfs-8.5-2.el8.x86_64
-glusterfs-cli-8.5-2.el8.x86_64
-glusterfs-client-xlators-8.5-2.el8.x86_64
-glusterfs-events-8.5-2.el8.x86_64
-glusterfs-fuse-8.5-2.el8.x86_64
-glusterfs-geo-replication-8.5-2.el8.x86_64
-glusterfs-server-8.5-2.el8.x86_64
+glusterfs-8.6-1.el8.x86_64
+glusterfs-cli-8.6-1.el8.x86_64
+glusterfs-client-xlators-8.6-1.el8.x86_64
+glusterfs-events-8.6-1.el8.x86_64
+glusterfs-fuse-8.6-1.el8.x86_64
+glusterfs-geo-replication-8.6-1.el8.x86_64
+glusterfs-server-8.6-1.el8.x86_64
@@ -301,5 +301,5 @@
-kernel-4.18.0-326.el8.x86_64
-kernel-core-4.18.0-326.el8.x86_64
-kernel-modules-4.18.0-326.el8.x86_64
-kernel-tools-4.18.0-326.el8.x86_64
-kernel-tools-libs-4.18.0-326.el8.x86_64
+kernel-4.18.0-331.el8.x86_64
+kernel-core-4.18.0-331.el8.x86_64
+kernel-modules-4.18.0-331.el8.x86_64
+kernel-tools-4.18.0-331.el8.x86_64
+kernel-tools-libs-4.18.0-331.el8.x86_64
@@ -310 +310 @@
-kmod-kvdo-6.2.5.65-79.el8.x86_64
+kmod-kvdo-6.2.5.72-79.el8.x86_64
@@ -363 +363 @@
-libcurl-7.61.1-18.el8.x86_64
+libcurl-7.61.1-18.el8_4.1.x86_64
@@ -381,6 +381,6 @@
-libgfapi0-8.5-2.el8.x86_64
-libgfchangelog0-8.5-2.el8.x86_64
-libgfrpc0-8.5-2.el8.x86_64
-libgfxdr0-8.5-2.el8.x86_64
-libglusterd0-8.5-2.el8.x86_64
-libglusterfs0-8.5-2.el8.x86_64
+libgfapi0-8.6-1.el8.x86_64
+libgfchangelog0-8.6-1.el8.x86_64
+libgfrpc0-8.6-1.el8.x86_64
+libgfxdr0-8.6-1.el8.x86_64
+libglusterd0-8.6-1.el8.x86_64
+libglusterfs0-8.6-1.el8.x86_64
@@ -416 +415,0 @@
-libmetalink-0.1.3-7.el8.x86_64
@@ -558,2 +557,2 @@
-lvm2-2.03.12-5.el8.x86_64
-lvm2-libs-2.03.12-5.el8.x86_64
+lvm2-2.03.12-6.el8.x86_64
+lvm2-libs-2.03.12-6.el8.x86_64
@@ -641 +640 @@
-ovirt-ansible-collection-1.6.0-1.el8.noarch
+ovirt-ansible-collection-1.6.2-1.el8.noarch
@@ -649 +648 @@
-ovirt-node-ng-image-update-placeholder-4.4.8-1.el8.noarch
+ovirt-node-ng-image-update-placeholder-4.4.8.1-1.el8.noarch
@@ -657,2 +656,2 @@
-ovirt-release-host-node-4.4.8-1.el8.noarch
-ovirt-release44-4.4.8-1.el8.noarch
+ovirt-release-host-node-4.4.8.1-1.el8.noarch
+ovirt-release44-4.4.8.1-1.el8.noarch
@@ -665,3 +664,3 @@
-pacemaker-cluster-libs-2.1.0-5.el8.x86_64
-pacemaker-libs-2.1.0-5.el8.x86_64
-pacemaker-schemas-2.1.0-5.el8.noarch
+pacemaker-cluster-libs-2.1.0-6.el8.x86_64
+pacemaker-libs-2.1.0-6.el8.x86_64
+pacemaker-schemas-2.1.0-6.el8.noarch
@@ -773 +772 @@
-python3-gluster-8.5-2.el8.x86_64
+python3-gluster-8.6-1.el8.x86_64
@@ -835 +834 @@
-python3-perf-4.18.0-326.el8.x86_64
+python3-perf-4.18.0-331.el8.x86_64
@@ -935,2 +934,2 @@
-selinux-policy-3.14.3-75.el8.noarch
-selinux-policy-targeted-3.14.3-75.el8.noarch
+selinux-policy-3.14.3-76.el8.noarch
+selinux-policy-targeted-3.14.3-76.el8.noarch
@@ -941 +940 @@
-shadow-utils-4.6-13.el8.x86_64
+shadow-utils-4.6-14.el8.x86_64
@@ -948 +947 @@
-sos-4.1-4.el8.noarch
+sos-4.1-5.el8.noarch
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
2 years, 6 months
oVirt 4.3 DWH with Grafana
by Vrgotic, Marko
Dear oVirt,
We are currently running oVirt 4.3, and an upgrade/migration to 4.4 won’t be possible for a few more months.
I am looking for guidelines or a how-to for setting up Grafana using the Data Warehouse as a data source.
Has anyone already done this and would be willing to share the steps?
Kindly awaiting your reply.
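One approach that should work on 4.3, where Grafana is not bundled: point a standalone Grafana at the DWH database (ovirt_engine_history) as a plain PostgreSQL data source. A rough sketch, assuming DWH runs on the engine host and using a hypothetical read-only user named grafana_ro; note that on 4.3 PostgreSQL comes from the rh-postgresql10 software collection, so the psql invocation may differ:
  -- inside psql connected to ovirt_engine_history on the engine/DWH host:
  CREATE USER grafana_ro WITH PASSWORD 'changeme';
  GRANT CONNECT ON DATABASE ovirt_engine_history TO grafana_ro;
  GRANT USAGE ON SCHEMA public TO grafana_ro;
  GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana_ro;
  -- then allow the Grafana host in pg_hba.conf, reload PostgreSQL, and add a
  -- PostgreSQL data source in Grafana pointing at ovirt_engine_history.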
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e: m.vrgotic(a)activevideo.com
w: www.activevideo.com
ActiveVideo Networks BV. Mediacentrum 3745 Joop van den Endeplein 1.1217 WJ Hilversum, The Netherlands. The information contained in this message may be legally privileged and confidential. It is intended to be read only by the individual or entity to whom it is addressed or by their designee. If the reader of this message is not the intended recipient, you are on notice that any distribution of this message, in any form, is strictly prohibited. If you have received this message in error, please immediately notify the sender and/or ActiveVideo Networks, LLC by telephone at +1 408.931.9200 and delete or destroy any copy of this message.
2 years, 6 months
oVirt Monitoring Alerts via Grafana
by Aviv Litman
Hi all,
Hope you all are doing well.
Check out this new oVirt blog: oVirt Monitoring Alerts via Grafana
<https://blogs.ovirt.org/2021/08/ovirt-monitoring-alerts-via-grafana/>.
The blog explains how to configure alerts in Grafana for your oVirt
environment and provides an example alerts dashboard that you can import,
use and edit to your needs.
When using alerts, significant or critical data changes can be recognized
immediately, so don't miss this opportunity to learn how to configure and
use this important tool.
Feedback, comments and suggestions are more than welcome!
--
Aviv Litman
BI Associate Software Engineer
Red Hat <https://www.redhat.com/>
alitman(a)redhat.com
2 years, 6 months
problems testing 4.3.10 to 4.4.8 upgrade SHE
by Gianluca Cecchi
Hello,
I'm testing what is in the subject line, in a test environment with novirt1 and novirt2 as hosts.
The first reinstalled host is novirt2.
For this I downloaded the 4.4.8 iso of the node:
https://resources.ovirt.org/pub/ovirt-4.4/iso/ovirt-node-ng-installer/4.4...
Before running the restore command for the first scratched node I
pre-installed the appliance rpm on it and I got:
ovirt-engine-appliance-4.4-20210818155544.1.el8.x86_64
I selected the option to pause, and I arrived at this point with the local engine VM completing its
setup:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add host]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Include after_add_host tasks files]
[ INFO ] You can now connect to https://novirt2.localdomain.local:6900/ovirt-engine/ and check the status of this host and eventually remediate it, please continue only when the host is listed as 'up'
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until /tmp/ansible.4_o6a2wo_he_setup_lock is removed, delete it once ready to proceed]
But connecting to the provided
https://novirt2.localdomain.local:6900/ovirt-engine/ URL,
I see that only the host still on 4.3.10 is listed as up, while novirt2 is not
responsive.
vm situation:
https://drive.google.com/file/d/1OwHHzK0owU2HWZqvHFaLLbHVvjnBhRRX/view?us...
storage situation:
https://drive.google.com/file/d/1D-rmlpGsKfRRmYx2avBk_EYCG7XWMXNq/view?us...
hosts situation:
https://drive.google.com/file/d/1yrmfYF6hJFzKaG54Xk0Rhe2kY-TIcUvA/view?us...
In engine.log I see
2021-08-25 09:14:38,548+02 ERROR
[org.ovirt.engine.core.vdsbroker.HostDevListByCapsVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-4) [5f4541ee] Command
'HostDevListByCapsVDSCommand(HostName = novirt2.localdomain.local,
VdsIdAndVdsVDSCommandParametersBase:{hostId='ca9ff6f7-5a7c-4168-9632-998c52f76cfa',
vds='Host[novirt2.localdomain.local,ca9ff6f7-5a7c-4168-9632-998c52f76cfa]'})'
execution failed: java.net.ConnectException: Connection refused
and this message repeats continuously...
I also tried to restart vdsmd on novirt2 but nothing changed.
Do I have to restart the HA daemons on novirt2?
Any insight?
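For what it's worth, the repeating "Connection refused" from HostDevListByCapsVDSCommand normally just means nothing is answering on VDSM's port (54321) on novirt2, so checking VDSM itself before touching the HA daemons is usually the faster path. A minimal sketch of the checks on novirt2 (standard commands, nothing environment-specific):
  systemctl status vdsmd supervdsmd        # are both services actually running?
  ss -tlnp | grep 54321                    # is vdsm listening on its port?
  tail -n 200 /var/log/vdsm/vdsm.log       # why it stopped or refuses connections
  vdsm-client Host getCapabilities > /dev/null && echo "vdsm answers locally"
  # only once vdsm answers, restart the HA services if they are stuck:
  systemctl restart ovirt-ha-broker ovirt-ha-agent
  hosted-engine --vm-status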
Thanks
Gianluca
2 years, 6 months
Error when trying to change master storage domain
by Matthew Benstead
Hello,
I'm trying to decommission the old master storage domain in oVirt and
replace it with a new one. All of the VMs have been migrated off of the
old master, and everything has been running on the new storage domain
for a couple of months. But when I try to put the old domain into
maintenance mode I get an error.
Old Master: vm-storage-ssd
New Domain: vm-storage-ssd2
The error is:
Failed to Reconstruct Master Domain for Data Center EDC2
As well as:
Sync Error on Master Domain between Host daccs01 and oVirt Engine.
Domain: vm-storage-ssd is marked as Master in oVirt Engine database but
not on the Storage side. Please consult with Support on how to fix this
issue.
2021-07-28 11:41:34,870-07 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
(EE-ManagedThreadFactory-engine-Thread-23) [] Master domain version is
not in sync between DB and VDSM. Domain vm-storage-ssd
marked as master, but the version in DB: 283 and in VDSM: 280
And:
Not stopping SPM on vds daccs01, pool id
f72ec125-69a1-4c1b-a5e1-313fcb70b6ff as there are uncleared tasks Task
'5fa9edf0-56c3-40e4-9327-47bf7764d28d', status 'finished'
After a couple of minutes all the domains are marked as active again and
things continue, but vm-storage-ssd is still listed as the master
domain. Any thoughts?
This is on 4.3.10.4-1.el7 on CentOS 7.
engine=# SELECT storage_name, storage_pool_id, storage, status FROM storage_pool_with_storage_domain ORDER BY storage_name;
     storage_name      |           storage_pool_id            |                storage                 | status
-----------------------+--------------------------------------+----------------------------------------+--------
 compute1-iscsi-ssd    | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | yvUESE-yWUv-VIWL-qX90-aAq7-gK0I-EqppRL |      1
 compute7-iscsi-ssd    | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | 8ekHdv-u0RJ-B0FO-LUUK-wDWs-iaxb-sh3W3J |      1
 export-domain-storage | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | d3932528-6844-481a-bfed-542872ace9e5   |      1
 iso-storage           | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | f800b7a6-6a0c-4560-8476-2f294412d87d   |      1
 vm-storage-7200rpm    | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | a0bff472-1348-4302-a5c7-f1177efa45a9   |      1
 vm-storage-ssd        | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | 95acd9a4-a6fb-4208-80dd-1c53d6aacad0   |      1
 vm-storage-ssd2       | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | 829d0600-c3f7-4dae-a749-d7f05c6a6ca4   |      1
(7 rows)
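Since the engine reports master version 283 in the DB versus 280 on VDSM, the next useful data point is usually to compare what the SPM host itself reports for the pool's master version with what the engine database stores. A sketch using the pool UUID from the query above; the engine-side column name (master_domain_version on storage_pool) is from memory, so verify it before relying on it:
  # on the SPM host (daccs01): pool info as VDSM sees it, including the master version
  vdsm-client StoragePool getInfo storagepoolID=f72ec125-69a1-4c1b-a5e1-313fcb70b6ff
  # on the engine host (psql is under the rh-postgresql10 collection on 4.3):
  su - postgres -c 'scl enable rh-postgresql10 -- psql engine -c "SELECT name, master_domain_version FROM storage_pool;"'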
Thanks,
-Matthew
--
2 years, 6 months
data storage domain iso upload problem
by csabany@freemail.hu
Hi,
I manage an oVirt 4.4.7 environment for production systems.
Last week I removed the master storage domain (moved templates and VMs as well, detached it, etc.), but I forgot to move the ISOs.
Now, when I upload a new ISO to a data storage domain, the system shows it, but it's unbootable:
"could not read from cdrom code 0005"
thanks
csabany
2 years, 6 months