Error: Adding new Host to ovirt-engine
by Ahmad Khiet
Hi,
I can't add a new host to the oVirt engine because of the following error:
2019-06-12 12:23:09,664 p=4134 u=engine | TASK [ovirt-host-deploy-facts : Set facts] *************************************
2019-06-12 12:23:09,684 p=4134 u=engine | ok: [10.35.1.17] => {
    "ansible_facts": {
        "ansible_python_interpreter": "/usr/bin/python2",
        "host_deploy_vdsm_version": "4.40.0"
    },
    "changed": false
}
2019-06-12 12:23:09,697 p=4134 u=engine | TASK [ovirt-provider-ovn-driver : Install ovs] *********************************
2019-06-12 12:23:09,726 p=4134 u=engine | fatal: [10.35.1.17]: FAILED! => {}
MSG:
The conditional check 'cluster_switch == "ovs" or (ovn_central is defined and ovn_central | ipaddr and ovn_engine_cluster_version is version_compare('4.2', '>='))' failed. The error was: The ipaddr filter requires python's netaddr be installed on the ansible controller
The error appears to be in '/home/engine/apps/engine/share/ovirt-engine/playbooks/roles/ovirt-provider-ovn-driver/tasks/configure.yml': line 3, column 5, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- block:
    - name: Install ovs
      ^ here
2019-06-12 12:23:09,728 p=4134 u=engine | PLAY RECAP *********************************************************************
2019-06-12 12:23:09,728 p=4134 u=engine | 10.35.1.17 : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
What's missing?
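The error itself seems to point at Python's netaddr library being absent on the Ansible controller (the engine machine) - is that it? A minimal check I'd run there, as a sketch only, since the exact interpreter and package name depend on the setup:

#!/usr/bin/python
# Check whether netaddr is importable with the interpreter Ansible uses on
# the controller; if it is not, installing the netaddr package (for example
# python2-netaddr or python3-netaddr) should fix the failed conditional.
try:
    import netaddr
    print("netaddr %s is available" % netaddr.__version__)
except ImportError:
    print("netaddr is missing on the Ansible controller")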
Thanks
--
Ahmad Khiet
Red Hat <https://www.redhat.com/>
akhiet(a)redhat.com
M: +972-54-6225629
3 months, 3 weeks
Error Java SDK Issue??
by Geschwentner, Patrick
Dear Ladies and Gentlemen!
I am currently working with the java-sdk and I encountered a problem.
When I try to retrieve the disk details, I get the following error:
Disk currDisk = ovirtConnection.followLink(diskAttachment.disk());
The error occurs in this line:
[inline screenshot of the failing line, not reproduced here]
The getResponse looks quite OK (I inspected it in a second screenshot, also not reproduced here, and it looks fine).
Error:
wrong number of arguments
The code is quite similar to what you published on GitHub (https://github.com/oVirt/ovirt-engine-sdk-java/blob/master/sdk/src/test/j... ).
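For comparison, the equivalent flow in the Python SDK (ovirtsdk4) looks like the hedged sketch below - connection details are placeholders, and it is only meant to illustrate the followLink pattern, not the Java code in question:

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vm = connection.system_service().vms_service().list(search='name=myvm')[0]
# followLink in Java corresponds to follow_link here: resolve the
# disk_attachments link, then the disk link of each attachment.
for attachment in connection.follow_link(vm.disk_attachments):
    disk = connection.follow_link(attachment.disk)
    print(disk.name, disk.provisioned_size)

connection.close()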
Can you confirm the defect?
Best regards
Patrick
2 years, 8 months
oVirt 4.4.0 Beta release refresh is now available for testing
by Sandro Bonazzola
oVirt 4.4.0 Beta release refresh is now available for testing
The oVirt Project is excited to announce the availability of the beta release of oVirt 4.4.0 refresh (beta 4) for testing, as of April 17th, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.3.
Important notes before you try it
Please note this is a Beta release.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
In particular, please note that upgrades from 4.3 to this beta, and future upgrades from this beta to the final 4.4 release, are not supported.
Some of the features included in oVirt 4.4.0 Beta require content that will be available in CentOS Linux 8.2 but can't be tested on RHEL 8.2 beta yet, due to an incompatibility in the openvswitch package shipped by the CentOS Virt SIG, which requires rebuilding openvswitch on top of CentOS 8.2.
Known Issues
- ovirt-imageio development is still in progress. In this beta you can't upload images to data domains using the engine web application. You can still copy ISO images into the deprecated ISO domain for installing VMs, and upload and download to/from data domains is fully functional via the REST API and SDK.
For uploading and downloading via the SDK, please see:
- https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload...
- https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/downlo...
Both scripts are standalone command line tools; try --help for more info.
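For context, here is a condensed, hedged sketch of what the upload flow in those scripts boils down to with the Python SDK (ovirtsdk4) - the URL, credentials and disk id are placeholders, and the actual data transfer step is only described in a comment:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Start an image transfer for an existing disk, then wait until the engine
# is ready to accept data.
transfers_service = connection.system_service().image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(
        disk=types.Disk(id='123e4567-e89b-12d3-a456-426614174000'),
        direction=types.ImageTransferDirection.UPLOAD,
    )
)
transfer_service = transfers_service.image_transfer_service(transfer.id)
while transfer.phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(1)
    transfer = transfer_service.get()

# At this point the real scripts PUT the image data to transfer.transfer_url
# (or proxy_url) over HTTPS, then finalize the transfer.
transfer_service.finalize()
connection.close()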
Installation instructions
For the engine: either use the appliance or:
- Install CentOS Linux 8 minimal from http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps postgresql:12
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use the oVirt Node ISO or:
- Install CentOS Linux 8 from http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-... ; select minimal installation
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- Attach the host to the engine and let it be deployed.
What’s new in oVirt 4.4.0 Beta?
- Hypervisors based on CentOS Linux 8 (rebuilt from award-winning RHEL 8), for both oVirt Node and standalone CentOS Linux hosts
- Easier network management and configuration flexibility with NetworkManager
- VMs based on a more modern Q35 chipset, with legacy SeaBIOS and UEFI firmware
- Support for direct passthrough of local host disks to VMs
- Live migration improvements for High Performance guests
- New Windows guest tools installer based on the WiX framework, now moved to the VirtioWin project
- Dropped support for cluster levels prior to 4.2
- Dropped SDK3 support
- 4K disk support for file-based storage only; iSCSI/FC storage does not support 4K disks yet
- Exporting a VM to a data domain
- Editing of floating disks
- Integration of ansible-runner into the engine, which allows more detailed monitoring of playbooks executed from the engine
- Adding/reinstalling hosts is now completely based on Ansible
- The OpenStack Neutron Agent can no longer be configured by oVirt; it should be configured by TripleO instead
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
If you manage more than one oVirt instance, OKD, or RDO, we also recommend trying ManageIQ <http://manageiq.org/>.
In that case, please be sure to take the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work-life balance. Therefore there is no need to answer this email out of your office hours.
3 years, 7 months
dhcp monitoring module
by Michal Skrivanek
Hi Ales,
did the recent changes pass OST? It seems to be failing now, but maybe we're just missing some required dependency in OST?
I’m getting things like:
Apr 29 18:09:58 lago-basic-suite-master-host-0 python3[22154]: detected unhandled Python exception in '/etc/NetworkManager/dispatcher.d/dhcp_monitor.py'
Apr 29 18:09:58 lago-basic-suite-master-host-0 nm-dispatcher[22146]: Traceback (most recent call last):
Apr 29 18:09:58 lago-basic-suite-master-host-0 nm-dispatcher[22146]: File "/etc/NetworkManager/dispatcher.d/dhcp_monitor.py", line 78, in <module>
Apr 29 18:09:58 lago-basic-suite-master-host-0 nm-dispatcher[22146]: main()
Apr 29 18:09:58 lago-basic-suite-master-host-0 nm-dispatcher[22146]: File "/etc/NetworkManager/dispatcher.d/dhcp_monitor.py", line 41, in main
Apr 29 18:09:58 lago-basic-suite-master-host-0 nm-dispatcher[22146]: handle_up(device)
Apr 29 18:09:58 lago-basic-suite-master-host-0 nm-dispatcher[22146]: File "/etc/NetworkManager/dispatcher.d/dhcp_monitor.py", line 51, in handle_up
Apr 29 18:09:58 lago-basic-suite-master-host-0 nm-dispatcher[22146]: send_configuration(content)
Apr 29 18:09:58 lago-basic-suite-master-host-0 nm-dispatcher[22146]: File "/etc/NetworkManager/dispatcher.d/dhcp_monitor.py", line 73, in send_configuration
Apr 29 18:09:58 lago-basic-suite-master-host-0 nm-dispatcher[22146]: client.connect(SOCKET_PATH)
Apr 29 18:09:58 lago-basic-suite-master-host-0 nm-dispatcher[22146]: FileNotFoundError: [Errno 2] No such file or directory
Apr 29 18:09:58 lago-basic-suite-master-host-0 nm-dispatcher[22146]: req:1 'dhcp6-change' [eth1], "/etc/NetworkManager/dispatcher.d/dhcp_monitor.py": complete: failed with Script '/etc/NetworkManager/dispatcher.d/dhcp_monitor.py' exited with error status 1.
and it fails host deploy.
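For reference, the traceback dies at the client.connect(SOCKET_PATH) call in send_configuration(), i.e. the listener socket simply isn't there on the host. A rough sketch of that code path with a defensive guard (the socket path and payload handling are my guesses, not the actual module code):

import os
import socket

SOCKET_PATH = "/run/vdsm/dhcp-monitor.sock"  # placeholder path, not the real one

def send_configuration(content):
    # Guarding the connect avoids the FileNotFoundError seen above when
    # nothing on the host has created the listening socket yet.
    if not os.path.exists(SOCKET_PATH):
        return
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
        client.connect(SOCKET_PATH)
        client.sendall(content.encode("utf-8"))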
Can you please check that out ASAP?
Thanks,
michal
3 years, 7 months
package conflicts when executing dnf update
by Dana Elfassy
Hi,
I'm working on CentOS 8.1
Whenever I try to run dnf update, I get this error:
Error:
 Problem: package qemu-kvm-core-15:4.1.0-23.el8.1.x86_64 requires libvirglrenderer.so.0()(64bit), but none of the providers can be installed
  - cannot install both virglrenderer-0.8.0-1.20191002git4ac3a04c.el8.x86_64 and virglrenderer-0.6.0-5.20180814git491d3b705.el8.x86_64
  - cannot install both virglrenderer-0.6.0-5.20180814git491d3b705.el8.x86_64 and virglrenderer-0.8.0-1.20191002git4ac3a04c.el8.x86_64
  - cannot install the best update candidate for package virglrenderer-0.6.0-5.20180814git491d3b705.el8.x86_64
  - cannot install the best update candidate for package qemu-kvm-core-15:4.1.0-23.el8.1.x86_64
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
I've tried the --allowerasing and --skip-broken options, which led to the same error. I also tried updating virglrenderer and got the same error. The only way I was able to update was by removing qemu-kvm-core (and all of its dependencies) and then reinstalling it.
Has anyone faced this issue? Any suggestions on how to resolve it?
Thanks,
Dana
3 years, 7 months
planned Jenkins restart
by Evgheni Dereveanchin
Hi everyone,
I'll be performing a planned Jenkins restart within the next hour.
No new jobs will be scheduled during this maintenance period.
I will inform you once it is back online.
--
Regards,
Evgheni Dereveanchin
3 years, 7 months
PoC - using pre-built VM images in OST
by Marcin Sobczyk
Hi,
recently I've been working on a PoC for OST that replaces the usage
of lago templates with pre-built, layered VM images packed in RPMs [2][7].
What's the motivation?
There are two big pains around OST - the first is that it's slow, and the second is that it uses lago, which is unmaintained.
Lago launches VMs based on templates. It actually has its own mechanism for VM templating - you can find the ones we currently use here [1]. How are these templates created? There is a multiple-page doc somewhere that describes the process, but few are familiar with it. These templates are nothing special really - just an xz-compressed qcow with some metadata attached. The proposal here is to replace those templates with RPMs with qcows inside. The RPMs themselves would be built by a CI pipeline. An example of such a pipeline can be found here [2].
Why RPMs?
It ticks all the boxes really. RPMs provide:
- tried and well-known mechanisms for packaging, versioning, and distribution instead of lago's custom ones
- dependencies, which permit layering the VM images in a controllable way
- we already install RPMs when running OST, so using the new ones is a matter of adding some dependencies
How does the image-building pipeline work? [3]
- we download a DVD ISO for installation of the distro
- we use 'virt-install' with the DVD ISO + a kickstart file to build a 'base' layer qcow image
- we create another qcow image that has the 'base' image as the backing one. In this image we use 'virt-customize' to run 'dnf upgrade'. This is our 'upgrade' layer.
- we create two more qcow images that have the 'upgrade' image as the backing one. On one of them we install the 'ovirt-host' package and on the other 'ovirt-engine'. These are our 'host-installed' and 'engine-installed' layers.
- we create 4 RPMs for these qcows:
  * ost-images-base
  * ost-images-upgrade
  * ost-images-host-installed
  * ost-images-engine-installed
- we publish the RPMs to the templates.ovirt.org/yum/ DNF repository (not implemented yet)
Each of those RPMs holds its respective qcow image. They also have proper dependencies set up - since the 'upgrade' layer requires the 'base' layer to be functional, it has an RPM requirement on that package. The same goes for the '*-installed' packages, which depend on the 'upgrade' package.
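To make the layering above concrete, a rough Python sketch of the qemu-img/virt-customize calls the steps describe (file names, package names and exact options are illustrative, not the actual ost-images Makefile):

#!/usr/bin/env python3
import subprocess

def run(*cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 'upgrade' layer: a new qcow2 backed by the 'base' image, then dnf upgrade inside it.
run("qemu-img", "create", "-f", "qcow2",
    "-b", "base.qcow2", "-F", "qcow2", "upgrade.qcow2")
run("virt-customize", "-a", "upgrade.qcow2", "--run-command", "dnf upgrade -y")

# '*-installed' layers: backed by 'upgrade', with the product packages installed.
for layer, pkg in (("host-installed", "ovirt-host"),
                   ("engine-installed", "ovirt-engine")):
    run("qemu-img", "create", "-f", "qcow2",
        "-b", "upgrade.qcow2", "-F", "qcow2", f"{layer}.qcow2")
    run("virt-customize", "-a", f"{layer}.qcow2", "--install", pkg)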
Since this is only a PoC, there's still a lot of room for improvement around the pipeline. The 'base' RPM would actually be built very rarely, since it's a bare distro, while the 'upgrade' and '*-installed' RPMs would be built nightly. This would allow us to simply type 'dnf upgrade' on any machine and have a fresh set of VMs ready to be used with OST.
Advantages:
- we have CI for building OST images instead of the current, obscure template-creation process
- we get rid of lots of unnecessary preparations that are done during each OST run by moving stuff from 'deploy scripts' [4] to the image-building pipeline - this should speed up the runs a lot
- if the nightly pipeline for building images is not successful, the RPMs won't be published and OST will use the older ones. This makes a nice "early error detection" mechanism and can partially mitigate situations where everything is blocked because of, e.g., dependency issues.
- it's another step towards removing responsibilities from lago
- the pre-built VM images can be used for much more than OST - functional testing of vdsm/engine on a VM? We have an image for that
- we can build images for multiple distros, both upstream and downstream, easily
Caveats:
- we have to download the RPMs before running OST and that takes time, since they're big. This can be handled by having them cached on the CI slaves though.
- current limitations of CI and lago force us to make a copy of the images after installation, so they can be seen both by the processes in the chroot and by libvirt, which is running outside of the chroot. Right now they're placed in '/dev/shm' (which would actually make some sense if they could be shared among all OST runs on the slave, but that's another story). There are some possible workarounds for that problem too (like running pipelines on bare-metal machines with libvirt running inside the chroot).
- multiple qcow layers can slow down the runs because there's a lot of jumping around. This can be handled by e.g. introducing a meta package that squashes all the layers into one (see the sketch below).
- we need a way to run OST with custom-built artifacts. There are multiple ways we can approach it:
  * use the 'upgrade' layer and not a '*-installed' one
  * first build your artifacts, then build VM image RPMs that have your artifacts installed, and pass those RPMs to the OST run
  * add a 'ci build vms' command that does both of the above steps for you
Even here we can still benefit from using pre-built images - we can create a 'deps-installed' layer that sits between 'upgrade' and '*-installed' and contains all of vdsm's/engine's dependencies.
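To illustrate the squashing idea from the caveat above, a small hypothetical sketch - qemu-img convert flattens a qcow2 backing chain into a single standalone image that such a meta package could ship:

#!/usr/bin/env python3
import subprocess

def squash(layered_image, flat_image):
    # 'convert' reads through the whole backing chain and writes one
    # self-contained qcow2, so runs no longer jump across several files.
    subprocess.run(
        ["qemu-img", "convert", "-O", "qcow2", layered_image, flat_image],
        check=True,
    )

squash("engine-installed.qcow2", "engine-installed-flat.qcow2")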
Some numbers
Let's take a look at two OST runs - one that uses the old way of working [5] and one that uses the new pre-built VM images [6]. The hacky change that allows us to use the pre-built images is here [7]. Here are some running times:
- chroot init: 00:34 for the old way vs 14:03 for pre-built images
This happens because the slave didn't have the new RPMs and chroot cached, so it took a lot of time to download them - the RPMs are ~2GB currently. When they are available in the cache it will get much closer to the old-way timing.
- deployment times:
  * engine: 08:09 for the old way vs 03:31 for pre-built images
  * host-1: 05:05 for the old way vs 02:00 for pre-built images
Here we can clearly see the benefits. This is without any special fine-tuning really - even when using pre-built images there's still some deployment done, which can be moved to the image-creating pipeline.
Further improvements
We could probably get rid of all the funny custom repository stuff that we're doing right now, because we can put everything that's necessary into the pre-built VM images.
We can ship the images with an ssh key injected - currently lago injects an ssh key for the root user in each run, which requires selinux relabeling, which takes a lot of time.
We can try creating 'ovirt-deployed' images, where the whole oVirt solution would already be deployed for some tests.
WDYT?
Regards, Marcin
[1] https://templates.ovirt.org/repo/
[2] https://gerrit.ovirt.org/#/c/108430/
[3] https://gerrit.ovirt.org/#/c/108430/6/ost-images/Makefile.am
[4]
https://github.com/oVirt/ovirt-system-tests/tree/master/common/deploy-scr...
[5]
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
[6]
https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/902...
[7] https://gerrit.ovirt.org/#/c/108610/
3 years, 7 months