[ANN] oVirt 4.4.5 Second Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of oVirt 4.4.5
Second Release Candidate for testing, as of January 21st, 2021.
This update is the fifth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA or later should not require redoing these
steps if they were already performed while upgrading from 4.4.1 to 4.4.2 GA;
they only need to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If the root file system of your hosts is on a multipath device, be aware
that after upgrading from 4.4.1 to 4.4.5 the host may enter emergency mode.
To prevent this, be sure to upgrade oVirt Engine first, then on your hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode
   (if rebooted).
2. Reboot.
3. Upgrade to 4.4.5 (redeploy if already on 4.4.5).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
   place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
   rebuild the initramfs with the correct filter configuration.
6. Reboot.
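For hosts that are not oVirt Node, the per-host steps above can be sketched
as a shell sequence. This is only a sketch: it is wrapped in a function so
nothing runs by accident, and the lvm.conf path and the update command are
assumptions to adapt to your environment.

```shell
# Sketch of the per-host steps above; defined as a function so it is not
# executed accidentally. The lvm.conf path and update command are assumptions.
upgrade_multipath_host() {
    # 1. Remove the current lvm filter while still on 4.4.1
    #    (assumes the filter line lives in /etc/lvm/lvm.conf)
    sed -i '/^\s*filter\s*=/d' /etc/lvm/lvm.conf
    # 2. Reboot here, then 3. upgrade the host to 4.4.5
    dnf -y update
    # 4. Confirm there is a new filter in place
    vdsm-tool config-lvm-filter -y
    # 5. Only if not using oVirt Node: rebuild the initramfs with the
    #    correct filter configuration
    dracut --force --add multipath
    # 6. Reboot again
    reboot
}
```

Since steps 2 and 6 are reboots, in practice this runs in stages rather than
as one uninterrupted function call.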
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
  on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
  the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
  <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
  <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
* oVirt Node 4.4 based on CentOS Linux 8.3 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
- We found a few issues while testing on CentOS Stream so we are still
basing oVirt 4.4.5 Node and Appliance on CentOS Linux.
Additional Resources:
* Read more about the oVirt 4.4.5 release highlights:
http://www.ovirt.org/release/4.4.5/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.5/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
"Exception: null" after "Check if post tasks file exists" (was: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 1892 - Failure!)
by Yedidyah Bar David
Hi all,
On Thu, Jan 21, 2021 at 4:56 AM <jenkins(a)jenkins.phx.ovirt.org> wrote:
>
> Project: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/
> Build: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1892/
https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/18...
:
2021-01-21 03:25:52,254+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-1) [6668b493] EVENT_ID:
ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Installing Host
lago-he-basic-suite-master-host-0.lago.local. Check if post tasks file
exists.
2021-01-21 03:25:52,258+01 ERROR
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
(EE-ManagedThreadFactory-engine-Thread-1) [6668b493] Exception: null
2021-01-21 03:25:52,296+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-1) [6668b493] EVENT_ID:
ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Installing Host
lago-he-basic-suite-master-host-0.lago.local. Gather the rpm package
facts.
2021-01-21 03:25:52,312+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-1) [6668b493] EVENT_ID:
ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Installing Host
lago-he-basic-suite-master-host-0.lago.local. Check if post tasks file
exists.
2021-01-21 03:25:52,316+01 ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [6668b493] Host installation
failed for host '736e10b6-0445-4a24-935d-87ba15cae4c4',
'lago-he-basic-suite-master-host-0.lago.local': null
2021-01-21 03:25:52,321+01 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [6668b493] START,
SetVdsStatusVDSCommand(HostName =
lago-he-basic-suite-master-host-0.lago.local,
SetVdsStatusVDSCommandParameters:{hostId='736e10b6-0445-4a24-935d-87ba15cae4c4',
status='InstallFailed', nonOperationalReason='NONE',
stopSpmFailureLogged='false', maintenanceReason='null'}), log id:
198cd7e7
2021-01-21 03:25:52,337+01 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [6668b493] FINISH,
SetVdsStatusVDSCommand, return: , log id: 198cd7e7
2021-01-21 03:25:52,363+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-1) [6668b493] EVENT_ID:
VDS_INSTALL_FAILED(505), Host
lago-he-basic-suite-master-host-0.lago.local installation failed.
Please refer to /var/log/ovirt-engine/engine.log and log logs under
/var/log/ovirt-engine/host-deploy/ for further details..
https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/18...
ends with:
2021-01-21 03:25:52 CET - TASK [Executing post tasks defined by user]
************************************
Any idea?
Best regards,
> Build Number: 1892
> Build Status: Failure
> Triggered By: Started by timer
>
> -------------------------------------
> Changes Since Last Success:
> -------------------------------------
> Changes for Build #1892
> [Andrej Cernek] pylint: Fix searched directories
>
>
>
>
> -----------------
> Failed Tests:
> -----------------
> No tests ran.
--
Didi
NullPointerException during host-deploy
by Yedidyah Bar David
Hi all,
Got an NPE on a patched he-basic-suite-master run while adding host-1 (not
during the initial deploy). Is this a known issue? Perhaps related to the
updated WildFly?
https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/15048/
https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/150...
:
2021-01-20 08:00:27,449+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-2) [4da9f40c] EVENT_ID:
ANSIBLE_RUNNER_EVENT_NOTIFICATION(559), Installing Host
lago-basic-suite-master-host-1. restart libvirtd.
2021-01-20 08:00:27,452+01 ERROR
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
(EE-ManagedThreadFactory-engine-Thread-2) [4da9f40c] Exception: null
2021-01-20 08:00:27,452+01 DEBUG
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
(EE-ManagedThreadFactory-engine-Thread-2) [4da9f40c] Exception: :
java.lang.NullPointerException
The "restart libvirtd" task itself seems successful, based on the host-deploy log.
Thanks and best regards,
--
Didi
My web console (remote viewer) cannot connect to another cluster's VMs.
by tommy
I encountered a problem using the web console (local VNC remote viewer) to
connect to VMs. The engine VM can be accessed with this method, but VMs in
other data centers or clusters cannot; when I try to connect, the remote
viewer program aborts immediately.
The following is the connection file for the VM that can be connected with
remote viewer:
[virt-viewer]
type=vnc
host=192.168.10.41
port=5900
password=rdXQA4zr/UAY
# Password is valid for 120 seconds.
delete-this-file=1
fullscreen=0
title=HostedEngine:%d
toggle-fullscreen=shift+f11
release-cursor=shift+f12
secure-attention=ctrl+alt+end
versions=rhev-win64:2.0-160;rhev-win32:2.0-160;rhel8:7.0-3;rhel7:2.0-6;rhel6:99.0-1
newer-version-url=http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources
[ovirt]
host=ooeng.tltd.com:443
vm-guid=76f99df2-ef79-45d9-8eea-a32b168f9ef3
sso-token=4Up7TfLLBjSuQgPkQvRz3D-fUGZWZg4ynApe2Y7ylkARCFwQWsfEr3dU8FjlK8esctm3Im4tz80mE1DjrNT3XQ
admin=1
ca=-----BEGIN
CERTIFICATE-----\nMIIDqDCCApCgAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBh
MCVVMxETAPBgNVBAoM\nCHRsdGQuY29tMR0wGwYDVQQDDBRvb2VuZy50bHRkLmNvbS4xNzczMDAe
Fw0yMTAxMTAxNjE1NDda\nFw0zMTAxMDkxNjE1NDdaMD8xCzAJBgNVBAYTAlVTMREwDwYDVQQKDA
h0bHRkLmNvbTEdMBsGA1UE\nAwwUb29lbmcudGx0ZC5jb20uMTc3MzAwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQCg\nYT9S7hWiXQUzAqFQKbg2nMjwyHDmb/JmKeJAUVZqNKRg1q80Ip
WyoM12Zw0nX1eTwMnVY/JtJON4\n13PoEC3So8nniGt+wtHr44ysvCWfU0SBk/ZPnKmQ58o5MlSk
idHwySChXfVPYLPWeUJ1JUrujna/\nCbi5bmmjx2pqwLrZXX8Q5NO2MRKOTs0Dtg16Q6z+a3cXLI
ffVJfhPGS3AkIh6nznNaDeH5gFZZbd\nr3DKE4xrpdw/7y6CgjmHe4vwGxOIyE+gElZ/lVtqznLM
wohz7wgtgsDA36277mujNyMjMbrSFheu\n5WfbIa9VVSZWEkISVq6eswLOQ1IRaFyJsFN9AgMBAA
Gjga0wgaowHQYDVR0OBBYEFDYEqJOMqN8+\nQhCP7DAkqF3RZMFdMGgGA1UdIwRhMF+AFDYEqJOM
qN8+QhCP7DAkqF3RZMFdoUOkQTA/MQswCQYD\nVQQGEwJVUzERMA8GA1UECgwIdGx0ZC5jb20xHT
AbBgNVBAMMFG9vZW5nLnRsdGQuY29tLjE3NzMw\nggIQADAPBgNVHRMBAf8EBTADAQH/MA4GA1Ud
DwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEA\nAKs0/yQWkoOkGcL0PjF9ijekdMmjrLZGyh
5uLot7h9s/Y2+5l9n9IzEjjx9chi8xwt6MBsR6/nBT\n/skcciv2veM22HwNGjdrHvhfbZFnZsGe
2TU60kGzKjlv1En/8Pgd2aWBcwTlr+SErBXkehNEJRj9\n1saycPgwS4pHS04c2+4JMhpe+hxgsO
2+N/SYkP95Lf7ZQynVsN/SKx7X3cWybErCqoB7G7McqaHN\nVWw+QNXo5islWUXqeDc3RcnW3kq0
XUEzEtp6hoeRcLKO99QrAW31zqU/QY+EeZ6Fax1O/jrDafZn\npTs0KJFNgeVnUhKanB29ONy+tm
nUmTAgPMaKKw==\n-----END CERTIFICATE-----\n
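The connection file above is a plain INI file, so the endpoint the remote
viewer will actually dial can be pulled out with a one-liner. The sample
below copies just the relevant [virt-viewer] keys from the working file:

```shell
# Minimal sample of the [virt-viewer] section from the working file above
cat > /tmp/console.vv <<'EOF'
[virt-viewer]
type=vnc
host=192.168.10.41
port=5900
EOF
# Extract the host and port the remote viewer will connect to
vnc_host=$(awk -F= '$1=="host"{print $2; exit}' /tmp/console.vv)
vnc_port=$(awk -F= '$1=="port"{print $2; exit}' /tmp/console.vv)
echo "viewer will connect to ${vnc_host}:${vnc_port}"
```

Note the connection goes directly from the client to the host named in
host=/port=, not through the engine, so that endpoint must be reachable.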
The firewall configuration of the host 192.168.10.41 is:
[root@ooengh1 ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: bond0 ovirtmgmt
sources:
services: cockpit dhcpv6-client libvirt-tls ovirt-imageio ovirt-vmconsole
snmp ssh vdsm
ports: 6900/tcp 22/tcp 6081/udp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
The following is the connection file for the VM that cannot be connected
with remote viewer:
[virt-viewer]
type=vnc
host=ohost1.tltd.com
port=5900
password=4/jWA+RLaSZe
# Password is valid for 120 seconds.
delete-this-file=1
fullscreen=0
title=testol:%d
toggle-fullscreen=shift+f11
release-cursor=shift+f12
secure-attention=ctrl+alt+end
versions=rhev-win64:2.0-160;rhev-win32:2.0-160;rhel8:7.0-3;rhel7:2.0-6;rhel6:99.0-1
newer-version-url=http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources
[ovirt]
host=ooeng.tltd.com:443
vm-guid=2b0eeecf-e561-4f60-b16d-dccddfcc852a
sso-token=4Up7TfLLBjSuQgPkQvRz3D-fUGZWZg4ynApe2Y7ylkARCFwQWsfEr3dU8FjlK8esctm3Im4tz80mE1DjrNT3XQ
admin=1
ca=-----BEGIN
CERTIFICATE-----\nMIIDqDCCApCgAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBh
MCVVMxETAPBgNVBAoM\nCHRsdGQuY29tMR0wGwYDVQQDDBRvb2VuZy50bHRkLmNvbS4xNzczMDAe
Fw0yMTAxMTAxNjE1NDda\nFw0zMTAxMDkxNjE1NDdaMD8xCzAJBgNVBAYTAlVTMREwDwYDVQQKDA
h0bHRkLmNvbTEdMBsGA1UE\nAwwUb29lbmcudGx0ZC5jb20uMTc3MzAwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQCg\nYT9S7hWiXQUzAqFQKbg2nMjwyHDmb/JmKeJAUVZqNKRg1q80Ip
WyoM12Zw0nX1eTwMnVY/JtJON4\n13PoEC3So8nniGt+wtHr44ysvCWfU0SBk/ZPnKmQ58o5MlSk
idHwySChXfVPYLPWeUJ1JUrujna/\nCbi5bmmjx2pqwLrZXX8Q5NO2MRKOTs0Dtg16Q6z+a3cXLI
ffVJfhPGS3AkIh6nznNaDeH5gFZZbd\nr3DKE4xrpdw/7y6CgjmHe4vwGxOIyE+gElZ/lVtqznLM
wohz7wgtgsDA36277mujNyMjMbrSFheu\n5WfbIa9VVSZWEkISVq6eswLOQ1IRaFyJsFN9AgMBAA
Gjga0wgaowHQYDVR0OBBYEFDYEqJOMqN8+\nQhCP7DAkqF3RZMFdMGgGA1UdIwRhMF+AFDYEqJOM
qN8+QhCP7DAkqF3RZMFdoUOkQTA/MQswCQYD\nVQQGEwJVUzERMA8GA1UECgwIdGx0ZC5jb20xHT
AbBgNVBAMMFG9vZW5nLnRsdGQuY29tLjE3NzMw\nggIQADAPBgNVHRMBAf8EBTADAQH/MA4GA1Ud
DwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEA\nAKs0/yQWkoOkGcL0PjF9ijekdMmjrLZGyh
5uLot7h9s/Y2+5l9n9IzEjjx9chi8xwt6MBsR6/nBT\n/skcciv2veM22HwNGjdrHvhfbZFnZsGe
2TU60kGzKjlv1En/8Pgd2aWBcwTlr+SErBXkehNEJRj9\n1saycPgwS4pHS04c2+4JMhpe+hxgsO
2+N/SYkP95Lf7ZQynVsN/SKx7X3cWybErCqoB7G7McqaHN\nVWw+QNXo5islWUXqeDc3RcnW3kq0
XUEzEtp6hoeRcLKO99QrAW31zqU/QY+EeZ6Fax1O/jrDafZn\npTs0KJFNgeVnUhKanB29ONy+tm
nUmTAgPMaKKw==\n-----END CERTIFICATE-----\n
The firewall configuration of the host ohost1.tltd.com (192.168.10.160) is:
[root@ohost1 ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: bond0 ovirtmgmt
sources:
services: cockpit dhcpv6-client libvirt-tls ovirt-imageio ovirt-vmconsole
snmp ssh vdsm
ports: 22/tcp 6081/udp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
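One visible difference between the two listings is that the working host
has 6900/tcp open while ohost1 only has 22/tcp and 6081/udp. A quick way to
check whether the console port named in a .vv file is actually reachable is
sketched below; the default host and port are taken from the failing file
above, and the suggested 5900-6923 port range is an assumption, not a
confirmed fix.

```shell
# Check whether a VNC console port is reachable from the client machine.
# Defaults are the values from the failing .vv file above.
check_console_port() {
    local host="${1:-ohost1.tltd.com}" port="${2:-5900}"
    # Use bash's /dev/tcp redirection so no extra tools are needed
    if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} is reachable"
    else
        echo "${host}:${port} is NOT reachable -- check firewalld on the host"
    fi
}
# If the port turns out to be closed, the console port range could be opened
# on the host (the 5900-6923 range is an assumption):
#   firewall-cmd --add-port=5900-6923/tcp && firewall-cmd --runtime-to-permanent
```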
Please give me some advice. Thanks.
About updating oVirt from 4.4.3 to 4.4.4 and the ovirt-node-ng-image-update RPM file.
by tommy
Hi, everyone:
I ran into a question while updating oVirt from 4.4.3 to 4.4.4.
First, I updated my oVirt engine VM using yum update, then reconfigured the
system using hosted-engine.
But then I found that the other hosts (including the engine VM's hosts and
other KVM hosts) showed that an update was needed. When I checked the yum
update info, I found that these hosts need to download and install this RPM
file:
[root@host1 ~]# yum update
Last metadata expiration check: 2:05:17 ago on Fri 15 Jan 2021 09:39:58 AM CST.
Dependencies resolved.
================================================================================
 Package                     Architecture  Version       Repository       Size
================================================================================
Installing:
 ovirt-node-ng-image-update  noarch        4.4.4-1.el8   ovirt-4.4       819 M
     replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.3-2.el8

Transaction Summary
================================================================================
Install  1 Package

Total download size: 819 M
Is this ok [y/N]: n
So I just downloaded the RPM file and unpacked it; I found that it includes
two files:
[root@oeng image]# ll
total 838416
-rw-r--r--. 1 root root 858533888 Jan 15 11:23
ovirt-node-ng-image-update-4.4.4-1.el8.squashfs.img
-rw-r--r--. 1 root root 3962 Jan 15 11:23 product.img
Then I unpacked the squashfs.img file and mounted it; I found that it is a
Linux root file system:
[root@oeng rootfs]# ll
total 92
lrwxrwxrwx. 1 root root 7 Nov 3 23:22 bin -> usr/bin
dr-xr-xr-x. 6 root root 4096 Dec 21 18:37 boot
drwxr-xr-x. 3 root root 4096 Dec 21 18:57 data
drwxr-xr-x. 2 root root 4096 Dec 21 17:42 dev
drwxr-xr-x. 131 root root 12288 Dec 21 19:00 etc
drwxr-xr-x. 2 root root 4096 Nov 3 23:22 home
lrwxrwxrwx. 1 root root 7 Nov 3 23:22 lib -> usr/lib
lrwxrwxrwx. 1 root root 9 Nov 3 23:22 lib64 -> usr/lib64
drwx------. 2 root root 16384 Dec 21 17:41 lost+found
drwxr-xr-x. 2 root root 4096 Nov 3 23:22 media
drwxr-xr-x. 2 root root 4096 Nov 3 23:22 mnt
drwxr-xr-x. 2 root root 4096 Nov 3 23:22 opt
drwxr-xr-x. 2 root root 4096 Dec 21 17:42 proc
drwxr-xr-x. 3 root root 4096 Dec 21 18:57 rhev
dr-xr-x---. 2 root root 4096 Dec 21 19:28 root
drwxr-xr-x. 2 root root 4096 Dec 21 17:42 run
lrwxrwxrwx. 1 root root 8 Nov 3 23:22 sbin -> usr/sbin
drwxr-xr-x. 2 root root 4096 Nov 3 23:22 srv
drwxr-xr-x. 2 root root 4096 Dec 21 17:42 sys
drwxrwxrwt. 2 root root 4096 Dec 21 19:00 tmp
drwxr-xr-x. 12 root root 4096 Dec 21 17:47 usr
drwxr-xr-x. 20 root root 4096 Dec 21 17:53 var
[root@oeng rootfs]#
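The unpack-and-mount procedure described above can be sketched as follows.
It is wrapped in a function because it needs root and the downloaded RPM;
the RPM file name is taken from the yum output above, and the location of
the squashfs image inside the payload is discovered rather than assumed.

```shell
# Inspect the ovirt-node-ng-image-update RPM without installing it.
# Needs root for the loop mount; the RPM name is from the yum output above.
inspect_node_image_rpm() {
    local rpm="ovirt-node-ng-image-update-4.4.4-1.el8.noarch.rpm"
    # Extract the RPM payload into the current directory without installing
    rpm2cpio "$rpm" | cpio -idmv
    # Find the squashfs image inside the extracted payload
    local img
    img=$(find . -name '*.squashfs.img' | head -n 1)
    # Mount it read-only and look at the root file system it contains
    mkdir -p /mnt/rootfs
    mount -o loop,ro "$img" /mnt/rootfs
    ls /mnt/rootfs
}
```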
But the product.img file cannot be mounted; it raises an error:
[root@oeng image]# mount -o loop ./product.img /mnt/product/
mount: /mnt/product: wrong fs type, bad option, bad superblock on
/dev/loop1, missing codepage or helper program, or other error.
I feel that this RPM is for making the engine VM. But I have already
updated my engine VM, so should I continue using yum update to update my
hosts?
Thanks!