oVirt 4.4.0 Beta release refresh is now available for testing
by Sandro Bonazzola
The oVirt Project is excited to announce the availability of the beta
release of oVirt 4.4.0 refresh for testing, as of April 9th, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.3.
Important notes before you try it
Please note this is a Beta release.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
In particular, please note that neither upgrades from 4.3 to this beta nor
future upgrades from this beta to the final 4.4 release are supported.
Some of the features included in oVirt 4.4.0 Beta require content that will
be available in CentOS Linux 8.2. They can’t be tested on RHEL 8.2 beta yet
due to an incompatibility in the openvswitch package shipped by the CentOS
Virt SIG, which needs to be rebuilt on top of CentOS 8.2.
Known Issues
-
ovirt-imageio development is still in progress. In this beta you can’t
upload images to data domains using the engine web application. You can
still copy ISO images into the deprecated ISO domain for installing VMs,
and upload and download to/from data domains are fully functional via the
REST API and SDK.
For uploading and downloading via the SDK, please see:
-
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload...
-
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/downlo...
Both scripts are standalone command line tools; try --help for more info.
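For scripted use from automation, the standalone tools can be driven with a
small wrapper; a minimal sketch, assuming the example files have been fetched
locally under the names upload_disk.py / download_disk.py (the exact filenames
come from the truncated links above, so adjust as needed):

```python
import os
import subprocess
import sys

def run_tool(script, *args):
    """Run one of the standalone SDK example scripts, if present locally."""
    if not os.path.exists(script):
        # The scripts are not packaged; they must be fetched from the
        # ovirt-engine-sdk repository first.
        return f"{script} not found; fetch it from the ovirt-engine-sdk repo"
    result = subprocess.run(
        [sys.executable, script, *args, "--help"],
        capture_output=True, text=True,
    )
    return result.stdout

print(run_tool("upload_disk.py"))
```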
Installation instructions
For the engine: either use appliance or:
- Install CentOS Linux 8 minimal from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps 389-ds
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use oVirt Node ISO or:
- Install CentOS Linux 8 from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
; select minimal installation
- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- Attach the host to engine and let it be deployed.
What’s new in oVirt 4.4.0 Beta?
-
Hypervisors based on CentOS Linux 8 (rebuilt from the award-winning RHEL 8),
for both oVirt Node and standalone CentOS Linux hosts
-
Easier network management and configuration flexibility with
NetworkManager
-
VMs based on a more modern Q35 chipset with legacy SeaBIOS and UEFI
firmware
-
Support for direct passthrough of local host disks to VMs
-
Live migration improvements for High Performance guests.
-
New Windows Guest tools installer based on WiX framework now moved to
VirtioWin project
-
Dropped support for cluster level prior to 4.2
-
Dropped SDK3 support
-
4K disk support, for file-based storage only; iSCSI/FC storage does not
support 4K disks yet.
-
Exporting a VM to a data domain
-
Editing of floating disks
-
Integrating ansible-runner into the engine, which allows more detailed
monitoring of playbooks executed from the engine
-
Adding/reinstalling hosts is now completely based on Ansible
-
The OpenStack Neutron Agent cannot be configured by oVirt anymore; it
should be configured by TripleO instead
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
If you manage more than one oVirt instance, OKD, or RDO, we also recommend
trying ManageIQ <http://manageiq.org/>.
In that case, please be sure to take the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
4 years, 6 months
hover_to_id couldnt find element compute
by Yedidyah Bar David
Hi all,
Almost two years ago I started a thread on this list with the same
subject. Eventually I pushed a small patch to make it try longer (24
seconds instead of 2.4). Later this happened again, but I didn't post
publicly - I only corresponded with Greg Sheremeta, who helped me
originally (and is now in OpenShift), and eventually did nothing else
about it. At the time, he said this might not have been simple
slowness, but some other problem. Now it happened again:
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
The last screenshot, taken before the failure, shows only "Loading ...",
just as back then.
Would any UI expert like to spend some time looking at this? Perhaps
adding whatever is needed to get more debug information, etc.?
For now I'll just retry, anyway.
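The earlier fix of making the lookup try longer (24 seconds instead of 2.4)
amounts to a generic polling loop. A minimal illustrative sketch of that
approach (names here are hypothetical, not the actual OST helper):

```python
import time

def wait_until(predicate, timeout=24.0, interval=0.1):
    """Poll predicate() until it returns a truthy value or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")

# Example: a condition that only becomes true on the third poll.
attempts = {"n": 0}
def element_visible():
    attempts["n"] += 1
    return attempts["n"] >= 3

wait_until(element_visible, timeout=5.0, interval=0.01)
```

The point of the long timeout is to distinguish real failures from plain UI
slowness; if the page still shows only "Loading ..." after 24 seconds, that
hints at an actual bug rather than slowness.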
Thanks and best regards,
--
Didi
How to add imageio configuration into engine?
by Vojtech Juranek
Hi,
I'd like to add imageio configuration into the engine. Previously, this was
done by dedicated setup code in imageio-proxy [1], but in the new imageio we
removed the proxy package, and to configure imageio for the engine we just
need to place the firewalld and imageio config files in the proper locations.
The initial idea was to do it directly in the engine spec file, but it seems
to me that configuration of the engine is done by dedicated packages and
custom code. The Ansible roles in packaging/ansible-runner-service-project
seem to be only for host configuration.
What is the proper way to configure imageio?
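Mechanically, the "place config files into the proper locations" step could be
sketched as below; the destination directories (e.g. /etc/ovirt-imageio/conf.d)
are assumptions for illustration, not confirmed engine/imageio paths:

```python
import shutil
from pathlib import Path

def install_conf(src, dest_dir):
    """Copy a config drop-in file into dest_dir, creating it if needed."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / Path(src).name
    shutil.copy(src, target)
    return target

# Hypothetical usage; the path is an assumption:
# install_conf("50-engine.conf", "/etc/ovirt-imageio/conf.d")
```

The open question in the thread is not the copying itself but which package or
setup component should own this step.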
Thanks
Vojta
[1] https://github.com/oVirt/ovirt-imageio/tree/master/proxy/setup
OST:: hc-basic-suite-master failing
by Parth Dhanjal
Hello!
The hc-basic-suite-master fails at:
+ lago shell lago-hc-basic-suite-master-host-0 /root/exec_playbook.sh lago-hc-basic-suite-master-host-0 lago-hc-basic-suite-master-host-1 lago-hc-basic-suite-master-host-2
+ RET_CODE=2
+ '[' 2 -ne 0 ']'
+ echo 'ansible setup on lago-hc-basic-suite-master-host-0 failed with status 2.'
Is this caused by an Ansible issue or a Lago error?
Re: Change in ovirt-provider-ovn[master]: test, do not merge: Drop el7
by Yedidyah Bar David
On Sun, Apr 5, 2020 at 1:35 PM Code Review <gerrit(a)ovirt.org> wrote:
>
> From gerrit-hooks <automation(a)ovirt.org>:
>
> gerrit-hooks has posted comments on this change.
>
> Change subject: test, do not merge: Drop el7
Seems to me like CQ is failing due to a failed check-merged job:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22088/
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22088/cons...
04:53:13 ovirt-provider-ovn_standard-on-merge (423) failed building
And indeed last patch to ovirt-provider-ovn:
https://gerrit.ovirt.org/#/c/107970/
Failed:
https://jenkins.ovirt.org/job/ovirt-provider-ovn_standard-on-merge/423/
https://jenkins.ovirt.org/job/ovirt-provider-ovn_standard-on-merge/423/ar...
ERROR: Could not find a version that satisfies the requirement
futurist>=2.1.0 (from openstacksdk) (from versions: 0.1.0, 0.1.1,
0.1.2, 0.2.0, 0.3.0, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.9.0, 0.10.0,
0.11.0, 0.12.0, 0.13.0, 0.14.0, 0.15.0, 0.16.0, 0.17.0, 0.18.0,
0.19.0, 0.20.0, 0.21.0, 0.21.1, 0.22.0, 0.23.0, 1.0.0, 1.1.0, 1.2.0,
1.3.0, 1.3.1, 1.3.2, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.8.1, 1.9.0,
1.10.0)
ERROR: No matching distribution found for futurist>=2.1.0 (from openstacksdk)
So perhaps branch ovirt-provider-ovn for 4.3 (and fix what's needed
there, if anything)
and drop el7 support from master.
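If keeping the el7 (Python 2) build alive were preferred instead of branching,
PEP 508 environment markers are one way to stop pip from resolving the
Python-3-only futurist there. A sketch only; the exact version bounds would
need verifying against futurist's release history:

```
# requirements sketch (illustrative bounds, not a tested pin set):
futurist>=2.1.0; python_version >= "3.6"
futurist<2.0.0; python_version < "3.0"
```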
Best regards,
> ......................................................................
>
>
> Patch Set 1:
>
> * Update Tracker::IGNORE, no bug url/s found
>
> --
> To view, visit https://gerrit.ovirt.org/108219
> To unsubscribe, visit https://gerrit.ovirt.org/settings
>
> Gerrit-Project: ovirt-provider-ovn
> Gerrit-Branch: master
> Gerrit-MessageType: comment
> Gerrit-Change-Id: Iff6b57f8e3965d559566f9bb026a3439fc0b857d
> Gerrit-Change-Number: 108219
> Gerrit-PatchSet: 1
> Gerrit-Owner: Yedidyah Bar David <didi(a)redhat.com>
> Gerrit-Reviewer: gerrit-hooks <automation(a)ovirt.org>
> Gerrit-Comment-Date: Sun, 05 Apr 2020 10:35:05 +0000
> Gerrit-HasComments: No
>
--
Didi
Fwd: [virt-devel] Fwd: Modularity Survey
by Yedidyah Bar David
Forwarding here as well. I did fill it in myself now, perhaps others
would like to too.
---------- Forwarded message ---------
From: Richard W.M. Jones <rjones(a)redhat.com>
Date: Fri, Apr 3, 2020 at 5:34 PM
Subject: [virt-devel] Fwd: Modularity Survey
To: <virt-devel(a)redhat.com>
Cc: <ddepaula(a)redhat.com>
----- Forwarded message from Daniel Mach <dmach(a)redhat.com> -----
Date: Fri, 3 Apr 2020 15:52:56 +0200
From: Daniel Mach <dmach(a)redhat.com>
To: Development discussions related to Fedora <devel(a)lists.fedoraproject.org>
Subject: Modularity Survey
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
Thunderbird/68.6.0
Hello everyone,
On behalf of Red Hat's Modularity team, I'd like to ask you to fill
out a survey on Modularity:
https://docs.google.com/forms/d/e/1FAIpQLScOA97rGONieSOYmlZLsHdkq-EhdePZ4...
Our goal is to use your feedback to improve Modularity, its
documentation and hopefully fix any issues you may have.
Modularity Survey
-----------------
The purpose of this survey is to get feedback on Modularity.
It is divided into 4 sections:
* Information about yourself (optional)
* Modularity & you
* Problems with Modularity you may have experienced
* Glossary review - what do you think the terms mean
Privacy / GDPR:
* The raw data incl. any personal information you provide will be
shared only with Red Hat's Modularity team (approx. 10 people) to
evaluate the survey
* The raw data will not be provided to anyone else at Red Hat or any
3rd parties
* Aggregated (anonymous) results of the survey will be published
Thank you for your cooperation.
_______________________________________________
devel mailing list -- devel(a)lists.fedoraproject.org
To unsubscribe send an email to devel-leave(a)lists.fedoraproject.org
Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org
----- End forwarded message -----
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html
--
Didi
ppc64le build-artifacts/check-merged job failing in "mock init"
by Nir Soffer
The ppc64le build artifacts jobs fail now in "mock init". Looks like
an environmental issue.
Here are few failing builds:
https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/2574/
https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/2573/
https://jenkins.ovirt.org/job/ovirt-imageio_standard-on-merge/573/
This seems to be the last successful build, 4 days ago:
https://jenkins.ovirt.org/job/ovirt-imageio_standard-on-merge/563/
----
[2020-04-04T19:28:16.332Z] + ../jenkins/mock_configs/mock_runner.sh
--execute-script automation/build-artifacts.py3.sh --mock-confs-dir
../jenkins/mock_configs --secrets-file
/home/jenkins/workspace/ovirt-imageio_standard-check-patch/std_ci_secrets.yaml
--try-proxy --timeout-duration 10800 --try-mirrors
http://mirrors.phx.ovirt.org/repos/yum/all_latest.json 'el8.*ppc64le'
[2020-04-04T19:28:16.332Z]
##########################################################
[2020-04-04T19:28:16.332Z]
##########################################################
[2020-04-04T19:28:16.332Z] ## Sat Apr 4 19:28:16 UTC 2020 Running
env: el8:epel-8-ppc64le
[2020-04-04T19:28:16.332Z]
##########################################################
[2020-04-04T19:28:16.332Z]
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[2020-04-04T19:28:16.332Z] @@ Sat Apr 4 19:28:16 UTC 2020 Running
chroot for script: automation/build-artifacts.py3.sh
[2020-04-04T19:28:16.332Z]
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[2020-04-04T19:28:16.599Z] Using base mock conf
../jenkins/mock_configs/epel-8-ppc64le.cfg
[2020-04-04T19:28:16.599Z] WARN: Unable to find req file
automation/build-artifacts.py3.req or
automation/build-artifacts.py3.req.el8, skipping req
[2020-04-04T19:28:16.599Z] Using proxified config
../jenkins/mock_configs/epel-8-ppc64le_proxied.cfg
[2020-04-04T19:28:16.599Z] Generating temporary mock conf
/home/jenkins/workspace/ovirt-imageio_standard-check-patch/ovirt-imageio/mocker-epel-8-ppc64le.el8
[2020-04-04T19:28:16.599Z] Skipping mount points
[2020-04-04T19:28:16.599Z] WARN: Unable to find repos file
automation/build-artifacts.py3.repos or
automation/build-artifacts.py3.repos.el8, skipping repos
[2020-04-04T19:28:16.599Z] Using chroot cache =
/var/cache/mock/epel-8-ppc64le-ef003ab0662b9b04a2143d179b949705
[2020-04-04T19:28:16.599Z] Using chroot dir =
/var/lib/mock/epel-8-ppc64le-ef003ab0662b9b04a2143d179b949705-15351
[2020-04-04T19:28:16.599Z] Skipping environment variables
[2020-04-04T19:28:16.599Z] ========== Initializing chroot
[2020-04-04T19:28:16.599Z] mock \
[2020-04-04T19:28:16.599Z] --old-chroot \
[2020-04-04T19:28:16.599Z]
--configdir="/home/jenkins/workspace/ovirt-imageio_standard-check-patch/ovirt-imageio"
\
[2020-04-04T19:28:16.599Z] --root="mocker-epel-8-ppc64le.el8" \
[2020-04-04T19:28:16.599Z] --resultdir="/tmp/mock_logs.FrjZTOEI/init" \
[2020-04-04T19:28:16.599Z] --init
[2020-04-04T19:28:17.186Z] WARNING: Could not find required logging
config file: /home/jenkins/workspace/ovirt-imageio_standard-check-patch/ovirt-imageio/logging.ini.
Using default...
[2020-04-04T19:28:17.186Z] INFO: mock.py version 1.4.21 starting
(python version = 3.6.8)...
[2020-04-04T19:28:17.186Z] Start(bootstrap): init plugins
[2020-04-04T19:28:17.186Z] INFO: selinux enabled
[2020-04-04T19:28:17.845Z] Finish(bootstrap): init plugins
[2020-04-04T19:28:17.845Z] Start: init plugins
[2020-04-04T19:28:17.845Z] INFO: selinux enabled
[2020-04-04T19:28:17.845Z] Finish: init plugins
[2020-04-04T19:28:17.845Z] INFO: Signal handler active
[2020-04-04T19:28:17.845Z] Start: run
[2020-04-04T19:28:17.845Z] Start: clean chroot
[2020-04-04T19:28:17.845Z] Finish: clean chroot
[2020-04-04T19:28:17.845Z] Start(bootstrap): chroot init
[2020-04-04T19:28:17.845Z] INFO: calling preinit hooks
[2020-04-04T19:28:17.845Z] INFO: enabled root cache
[2020-04-04T19:28:17.846Z] INFO: enabled dnf cache
[2020-04-04T19:28:17.846Z] Start(bootstrap): cleaning dnf metadata
[2020-04-04T19:28:17.846Z] Finish(bootstrap): cleaning dnf metadata
[2020-04-04T19:28:17.846Z] INFO: enabled HW Info plugin
[2020-04-04T19:28:17.846Z] Mock Version: 1.4.21
[2020-04-04T19:28:17.846Z] INFO: Mock Version: 1.4.21
[2020-04-04T19:28:18.119Z] Start(bootstrap): yum install
[2020-04-04T19:28:23.477Z] ERROR: Command failed:
[2020-04-04T19:28:23.477Z] # /usr/bin/yum --installroot
/var/lib/mock/epel-8-ppc64le-ef003ab0662b9b04a2143d179b949705-bootstrap-15351/root/
--releasever 8 install dnf dnf-plugins-core distribution-gpg-keys
--setopt=tsflags=nocontexts
[2020-04-04T19:28:23.477Z] Failed to set locale, defaulting to C
[2020-04-04T19:28:23.477Z] Error: Package:
python3-dnf-4.2.7-7.el8_1.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: /usr/libexec/platform-python
[2020-04-04T19:28:23.477Z] Error: Package:
python3-dnf-4.2.7-7.el8_1.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: python3-gpg
[2020-04-04T19:28:23.477Z] Error: Package:
python3-dnf-4.2.7-7.el8_1.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: python3-libdnf >= 0.35.1-9
[2020-04-04T19:28:23.477Z] Error: Package:
python3-dnf-4.2.7-7.el8_1.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: python3-rpm >= 4.14.0
[2020-04-04T19:28:23.477Z] Error: Package:
python3-dnf-4.2.7-7.el8_1.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: python3-hawkey >= 0.35.1-9
[2020-04-04T19:28:23.477Z] Error: Package:
python3-dnf-plugins-core-4.0.8-3.el8.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: python3-hawkey >= 0.34.0
[2020-04-04T19:28:23.477Z] Error: Package: dnf-4.2.7-7.el8_1.noarch
(centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: /bin/sh
[2020-04-04T19:28:23.477Z] Error: Package:
python3-dnf-4.2.7-7.el8_1.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: python3-libdnf
[2020-04-04T19:28:23.477Z] Error: Package:
python3-dnf-plugins-core-4.0.8-3.el8.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: python(abi) = 3.6
[2020-04-04T19:28:23.477Z] Available:
python2-2.7.16-12.module_el8.1.0+219+cf9e6ac9.ppc64le
(centos-appstream-el8)
[2020-04-04T19:28:23.477Z] python(abi) = 2.7
[2020-04-04T19:28:23.477Z] Error: Package:
python3-dnf-4.2.7-7.el8_1.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: libmodulemd >= 1.4.0
[2020-04-04T19:28:23.477Z] Error: Package:
python3-dnf-4.2.7-7.el8_1.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: python3-libcomps >= 0.1.8
[2020-04-04T19:28:23.477Z] Error: Package:
python3-dnf-4.2.7-7.el8_1.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: python(abi) = 3.6
[2020-04-04T19:28:23.477Z] Available:
python2-2.7.16-12.module_el8.1.0+219+cf9e6ac9.ppc64le
(centos-appstream-el8)
[2020-04-04T19:28:23.477Z] python(abi) = 2.7
[2020-04-04T19:28:23.477Z] Error: Package:
python3-six-1.11.0-8.el8.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: python(abi) = 3.6
[2020-04-04T19:28:23.477Z] Available:
python2-2.7.16-12.module_el8.1.0+219+cf9e6ac9.ppc64le
(centos-appstream-el8)
[2020-04-04T19:28:23.477Z] python(abi) = 2.7
[2020-04-04T19:28:23.477Z] Error: Package:
dnf-data-4.2.7-7.el8_1.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: libreport-filesystem
[2020-04-04T19:28:23.477Z] Error: Package:
1:python3-dateutil-2.6.1-6.el8.noarch (centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: python(abi) = 3.6
[2020-04-04T19:28:23.477Z] Available:
python2-2.7.16-12.module_el8.1.0+219+cf9e6ac9.ppc64le
(centos-appstream-el8)
[2020-04-04T19:28:23.477Z] python(abi) = 2.7
[2020-04-04T19:28:23.477Z] Error: Package: dnf-4.2.7-7.el8_1.noarch
(centos-base-el8)
[2020-04-04T19:28:23.477Z] Requires: systemd
[2020-04-04T19:28:23.477Z] You could try using --skip-broken to work
around the problem
[2020-04-04T19:28:23.477Z] You could try running: rpm -Va --nofiles --nodigest
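For triaging walls of output like the above, a small helper can extract just
the unresolved capabilities from a yum/mock log (the "Requires:" format is
taken from the paste; the helper name is illustrative). Note it keeps only the
capability name and drops any version constraint:

```python
import re

def missing_requires(log_text):
    """Return the sorted set of capabilities yum reported as unresolvable."""
    return sorted(set(re.findall(r"Requires:\s+(\S+)", log_text)))

# A shortened sample in the same format as the log above:
log = """\
Error: Package: python3-dnf-4.2.7-7.el8_1.noarch (centos-base-el8)
           Requires: /usr/libexec/platform-python
Error: Package: dnf-4.2.7-7.el8_1.noarch (centos-base-el8)
           Requires: systemd
"""
print(missing_requires(log))  # -> ['/usr/libexec/platform-python', 'systemd']
```

Here the missing capabilities (platform-python, systemd, libreport-filesystem,
python(abi) = 3.6, ...) all point at the base repo not being resolvable for
the bootstrap chroot, consistent with an environmental issue.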
Change Queue Tester is still failing
by Martin Perina
Hi,
Unfortunately, the CQ tester is still failing even after reverting back to nose.
Now it fails during host0 installation, when trying to establish a connection
after setup networks. On the engine we have the exception below:
2020-04-03 00:39:34,107-04 ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [7fb0fc5b] Host installation
failed for host 'efc5cbff-4255-4c6d-98fd-338169272e15',
'lago-basic-suite-master-host-0': Network error during communication
with the host
2020-04-03 00:39:34,107-04 DEBUG
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [7fb0fc5b] Exception:
org.ovirt.engine.core.bll.hostdeploy.VdsInstallException: Network
error during communication with the host
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.configureManagementNetwork(InstallVdsInternalCommand.java:447)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.executeCommand(InstallVdsInternalCommand.java:166)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1169)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1327)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2003)
at org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:140)
at org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:79)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1387)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:419)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.runAction(Backend.java:451)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:433)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:639)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.jboss.as.ee@18.0.1.Final//org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
at org.jboss.as.weld.common@18.0.1.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:79)
at org.jboss.as.weld.common@18.0.1.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:89)
at org.jboss.as.weld.common@18.0.1.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:102)
at org.jboss.as.ee@18.0.1.Final//org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@18.0.1.Final//org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ee@18.0.1.Final//org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:40)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
at org.jboss.as.ee@18.0.1.Final//org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@18.0.1.Final//org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@18.0.1.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:216)
at org.jboss.as.ejb3@18.0.1.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:418)
at org.jboss.as.ejb3@18.0.1.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:148)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
at org.jboss.weld.core@3.1.2.Final//org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:81)
at org.jboss.as.weld.common@18.0.1.Final//org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@18.0.1.Final//org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@18.0.1.Final//org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@18.0.1.Final//org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@18.0.1.Final//org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@18.0.1.Final//org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@18.0.1.Final//org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ee@18.0.1.Final//org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438)
at org.wildfly.security.elytron-private@1.10.4.Final//org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:627)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
at org.jboss.as.ee@18.0.1.Final//org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198)
at org.jboss.as.ee@18.0.1.Final//org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
at org.jboss.as.ee@18.0.1.Final//org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:81)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.interfaces.BackendInternal$$$view4.runInternalAction(Unknown
Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.jboss.weld.core@3.1.2.Final//org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:410)
at org.jboss.weld.core@3.1.2.Final//org.jboss.weld.module.ejb.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:134)
at org.jboss.weld.core@3.1.2.Final//org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56)
at org.jboss.weld.core@3.1.2.Final//org.jboss.weld.module.ejb.InjectionPointPropagatingEnterpriseTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:68)
at org.jboss.weld.core@3.1.2.Final//org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:106)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.BackendCommandObjectsHandler$BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.runInternalAction(Unknown
Source)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runInternalAction(CommandBase.java:2381)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand.lambda$executeCommand$3(AddVdsCommand.java:219)
at org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:96)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
2020-04-03 00:39:34,111-04 WARN
[org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable]
(EE-ManagedThreadFactory-engine-Thread-42) [7fb0fc5b] Retry failed
2020-04-03 00:39:34,111-04 DEBUG
[org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable]
(EE-ManagedThreadFactory-engine-Thread-42) [7fb0fc5b] null:
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1081)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1369)
at java.base/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:278)
at org.ovirt.vdsm-jsonrpc-java//org.ovirt.vdsm.jsonrpc.client.reactors.stomp.SSLStompClient.lambda$waitForConnect$0(SSLStompClient.java:111)
at org.ovirt.vdsm-jsonrpc-java//org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable.call(Retryable.java:27)
at org.ovirt.vdsm-jsonrpc-java//org.ovirt.vdsm.jsonrpc.client.utils.retry.AwaitRetry.retry(AwaitRetry.java:15)
at org.ovirt.vdsm-jsonrpc-java//org.ovirt.vdsm.jsonrpc.client.reactors.stomp.SSLStompClient.waitForConnect(SSLStompClient.java:110)
at org.ovirt.vdsm-jsonrpc-java//org.ovirt.vdsm.jsonrpc.client.reactors.stomp.SSLStompClient.sendMessage(SSLStompClient.java:80)
at org.ovirt.vdsm-jsonrpc-java//org.ovirt.vdsm.jsonrpc.client.JsonRpcClient.call(JsonRpcClient.java:93)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.jsonrpc.FutureMap.<init>(FutureMap.java:92)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer.lambda$timeBoundPollInternal$1(JsonRpcVdsServer.java:1025)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer$FutureCallable.call(JsonRpcVdsServer.java:499)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer$FutureCallable.call(JsonRpcVdsServer.java:488)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
And on VDSM we have an uncaught exception:
2020-04-03 00:37:35,278-0400 INFO (Reactor thread)
[ProtocolDetector.AcceptorImpl] Accepted connection from
::ffff:192.168.201.4:47644 (protocoldetector:61)
2020-04-03 00:37:35,280-0400 ERROR (Reactor thread) [vds.dispatcher]
uncaptured python exception, closing channel
<yajsonrpc.betterAsyncore.Dispatcher connected
('::ffff:192.168.201.4', 47644, 0, 0) at 0x7fb1b7721c50> (<class
'ssl.SSLError'>:[X509] no certificate or crl found (_ssl.c:3771)
[/usr/lib64/python3.6/asyncore.py|readwrite|110]
[/usr/lib64/python3.6/asyncore.py|handle_write_event|442]
[/usr/lib/python3.6/site-packages/yajsonrpc/betterAsyncore.py|handle_write|75]
[/usr/lib/python3.6/site-packages/yajsonrpc/betterAsyncore.py|_delegate_call|171]
[/usr/lib/python3.6/site-packages/vdsm/sslutils.py|handle_write|190]
[/usr/lib/python3.6/site-packages/vdsm/sslutils.py|_handle_io|194]
[/usr/lib/python3.6/site-packages/vdsm/sslutils.py|_set_up_socket|154])
(betterAsyncore:182)
More details can be found at
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22094
Marcin/Artur, could it be related to the fixes around swallowing SSLError:
https://gerrit.ovirt.org/108016 ?
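For what it's worth, the exact "[X509] no certificate or crl found" SSLError
can be reproduced by loading a CA/certificate file that contains no PEM
certificate, which would be consistent with the host's certificate files being
empty or not yet deployed when the engine connected. A standalone sketch, not
VDSM code:

```python
import ssl
import tempfile

def load_ca(path):
    """Build a client TLS context trusting the CA file at path."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile=path)
    return ctx

# A file with no PEM certificate in it triggers the error.
with tempfile.NamedTemporaryFile("w", suffix=".pem") as f:
    f.write("not a certificate\n")
    f.flush()
    try:
        load_ca(f.name)
        print("unexpectedly loaded")
    except ssl.SSLError as exc:
        print(exc)  # e.g. "[X509] no certificate or crl found"
```

So one thing worth checking is whether vdsm's cert/key/CA files existed and
were non-empty on host-0 at the time of the connection attempt.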
Thanks,
Martin
--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.