All stable branch maintainers, please make sure to
- merge all relevant open bugs by Wednesday morning, 11:00 AM TLV time.
For each package that needs to be built (i.e. an oVirt product), please make sure
every bug in MODIFIED has the right Target Release and Target Milestone.
The Target Release should state the version of the package you're building
and should match the version you used in the tag for this build
(e.g. for ovirt-engine, tag: ovirt-engine-X.Y.Z, target release: X.Y.Z).
A list of bugs that require attention is here:
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
We're working on a script that stands up an oVirt Engine and adds a node to
it. The issue is we don't know how long to wait before trying to add a
node. What we're doing right now is to check the status of the engine using:
to determine when the oVirt engine itself has booted. That link reports "DB
Up!Welcome to Health Status!" as soon as the web UI is accessible, but this
is not the same thing as having an actual usable cluster attached.
Would it be possible to have separate status messages to distinguish
between an engine that has a usable cluster and one that is missing it?
Is that already possible some other way? Blindly waiting for arbitrary
time periods is fragile.
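In the meantime, the polling we do could be sketched roughly as below. This is only an illustration of the workaround, not oVirt API documentation: the health-servlet response string comes from the message above, while the URL layout, credentials, and the second REST-API check are assumptions.

```python
# Hedged sketch: poll the engine health servlet until it answers, then
# keep polling until some second condition (e.g. a usable cluster seen
# via the REST API) holds.  URLs and predicates here are placeholders.
import time
import urllib.request

def engine_booted(body):
    # The health servlet answers "DB Up!Welcome to Health Status!"
    # as soon as the web UI is reachable -- nothing more than that.
    return "DB Up" in body

def fetch(url):
    """Fetch a URL and return its body, or '' on any connection error."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read().decode("utf-8", "replace")
    except OSError:
        return ""

def wait_until(predicate, url, interval=10, attempts=90):
    """Poll `url` until `predicate(body)` is true or attempts run out."""
    for _ in range(attempts):
        if predicate(fetch(url)):
            return True
        time.sleep(interval)
    return False
```

Something like `wait_until(engine_booted, "https://engine/ovirt-engine/services/health")` only tells us the UI is up; a second predicate would still be needed for the cluster, which is exactly the gap this mail is asking about.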
I saw a lot of failing builds lately on this job:
One log which still exists is:
It seems like the BUILD ENGINE RPM step is failing, but I can't see any obvious error in the log:
BUILDING ENGINE RPM
+ create_rpms /home/jenkins/workspace/ovirt-engine_master_upgrade-from-3.6_el7_created/tmp_repo/ovirt-engine-4.1.0-0.0.master.20161014001903.el7.centos.src.rpm
+ local src_rpm=/home/jenkins/workspace/ovirt-engine_master_upgrade-from-3.6_el7_created/tmp_repo/ovirt-engine-4.1.0-0.0.master.20161014001903.el7.centos.src.rpm
+ local dst_dir=/home/jenkins/workspace/ovirt-engine_master_upgrade-from-3.6_el7_created/tmp_repo
+ local release=.gitee47dd2
+ local workspace=/home/jenkins/workspace/ovirt-engine_master_upgrade-from-3.6_el7_created
+ local 'BUILD_JAVA_OPTS_MAVEN= -XX:MaxPermSize=1G
+ local 'BUILD_JAVA_OPTS_GWT= -XX:PermSize=512M
-XX:MaxPermSize=1G -Xms1G -Xmx6G '
+ env 'BUILD_JAVA_OPTS_MAVEN= -XX:MaxPermSize=1G
-Dgwt.compiler.localWorkers=1 ' 'BUILD_JAVA_OPTS_GWT=
-XX:PermSize=512M -XX:MaxPermSize=1G -Xms1G -Xmx6G '
rpmbuild -D 'ovirt_build_minimal 1' -D 'release_suffix .gitee47dd2' -D
-D '_srcrpmdir /home/jenkins/workspace/ovirt-engine_master_upgrade-from-3.6_el7_created/tmp_repo'
-D '_specdir /home/jenkins/workspace/ovirt-engine_master_upgrade-from-3.6_el7_created/tmp_repo'
-D '_sourcedir /home/jenkins/workspace/ovirt-engine_master_upgrade-from-3.6_el7_created/tmp_repo'
-D '_rpmdir /home/jenkins/workspace/ovirt-engine_master_upgrade-from-3.6_el7_created/tmp_repo'
-D '_builddir /home/jenkins/workspace/ovirt-engine_master_upgrade-from-3.6_el7_created/tmp_repo'
+ return 1
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :.* : True
Logical operation result is TRUE
Running script : #!/bin/bash -x
Does anyone else experience the following error from `engine-setup`?
***L:ERROR Internal error: No module named dwh
I have a suspicion it might be related to commit '221c7ed packaging: setup: Remove constants duplication'.
Project Proposal - Vagrant Provider
A vagrant provider for oVirt v4
This will be a provider plugin for the Vagrant suite that allows
easy command-line provisioning and lifecycle management of virtual
machines.
This Vagrant provider plugin will interface with the oVirt REST API
(version 4 and higher) using the oVirt provided ruby SDK
'ovirt-engine-sdk-ruby'. This allows users to abstract the user
interface and experience into a set of command-line abilities to
create, provision, destroy, and manage the complete lifecycle of
virtual machines. It also allows external configuration management
tools, and the configuration files themselves, to be committed into
source control.
I have previously forked and maintained the 'vagrant-ovirt' gem as
'vagrant-ovirt3' due to Gems requiring unique names. The original
author has officially abandoned the project. For the last few years
all code to maintain this project has been maintained by myself and a
few ad-hoc github contributors. This provider interfaced directly with
oVirt v3 using fog and rbovirt. The new project would be a fresh start
using the oVirt provided ruby SDK to work directly with version 4.
The trend in configuration management, operations, and devops has been
to maintain as much of the development process as possible in terms of
the virtual machines and hosts that they run on. With software like
Terraform, the tasks of creating the underlying infrastructure, such as
network rules, have had great success moving into 'Infrastructure
as Code'. The same company behind Terraform earned their reputation with
Vagrant which aims to utilize the same process for virtual machines
themselves. The core software allows for standard commands such as
'up', 'provision', 'destroy' to be used across a provider framework. A
provider for oVirt makes the process for managing VMs easier and able
to be controlled through code and source control.
The initial goal is to get the base steps of 'up', 'down' (halt), and
'destroy' to succeed using the oVirt provided ruby SDK for v4.
Stretch/followup goals would be to ensure testability and alternate
commands such as 'provision' and allow configuration management suites
like puppet to work via 'userdata' (cloud-init).
Version 3 of this software has been heavily utilized. The original
fork known as 'vagrant-ovirt' has been abandoned with no plans to
communicate or move forward. My upstream fork has had great success
with nearly 4x the downloads on rubygems.org. Until my github fork
has more 'stars' I cannot take over it completely, so the gem was
renamed 'vagrant-ovirt3'. This is also true for rubygems.org: since
gems are not namespaced, the gem could not be published without a
unique name. The v4 provider is still pending my initial POC commit
but there are no current barriers except initial oVirt hosting. The
hosting of oVirt v3 for testing is a laptop on a UPS at my home, and
v4 is also a different pc attached to a UPS.
RHEVM/oVirt REST API - This provider must interact with the API itself
to manage virtual machines.
Marcus Young ( 3vilpenguin at gmail dot com )
FYI oVirt product maintainers,
An oVirt build for an official release is going to start right now.
If you're a maintainer for any of the projects included in the oVirt
distribution and you have changes in your package ready to be released,
please:
- bump version and release to be GA ready
- tag your release within git (this implies a GitHub Release being
  automatically created)
- build your packages within jenkins / koji / copr / whatever
- verify all bugs on MODIFIED have target release and target milestone set.
- add your builds to releng-tools/releases/ovirt-4.0.5_rc2.conf within
Over the last 2.5 days I have been exploring whether and how we can integrate collectd and Vdsm.
The final picture could look like:
1. collectd does all the monitoring and reporting that Vdsm currently does
2. Engine consumes data from collectd
3. Vdsm consumes *notifications* from collectd - for a few but important tasks, like drive high water mark monitoring
Benefits (aka: why to bother?):
1. less code in Vdsm / long-awaited modularization of Vdsm
2. better integration with the system, reuse of well-known components
3. more flexibility in monitoring/reporting: collectd is an existing, special-purpose solution
4. faster, more scalable operation because all the monitoring can be done in C
At first glance, Collectd seems to have all the tools we need.
1. A plugin interface (https://collectd.org/wiki/index.php/Plugin_architecture and https://collectd.org/wiki/index.php/Table_of_Plugins)
2. Support for notifications and thresholds (https://collectd.org/wiki/index.php/Notifications_and_thresholds)
3. a libvirt plugin https://collectd.org/wiki/index.php/Plugin:virt
So, the picture is like
1. we start requiring collectd as a dependency of Vdsm
2. we either configure it appropriately (collectd supports config drop-ins: /etc/collectd.d) or we document our requirements (or both)
3. collectd monitors the hosts and libvirt
4. Engine polls collectd
5. Vdsm listens for notifications
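As an illustration of step 2, a drop-in could live under /etc/collectd.d/. This is only a sketch: the option names follow the collectd virt and network plugin documentation, but the values (connection URI, interval, listen address/port) are placeholder assumptions, not a tested configuration.

```
# /etc/collectd.d/vdsm.conf -- hedged sketch; values are placeholders
LoadPlugin virt
LoadPlugin network

<Plugin virt>
  Connection "qemu:///system"
  RefreshInterval 60
</Plugin>

# One possible way to expose the collected values to remote consumers
# (e.g. for Engine to poll); 25826 is collectd's default network port.
<Plugin network>
  Listen "0.0.0.0" "25826"
</Plugin>
```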
Should libvirt deliver us the event we need (see https://bugzilla.redhat.com/show_bug.cgi?id=1181659),
we can just stop using collectd notifications, everything else works as previously.
1. Collectd does NOT consider the plugin API stable (https://collectd.org/wiki/index.php/Plugin_architecture#The_interface.27s...),
so plugins should be included in the main tree, much like the modules of the Linux kernel.
Worth mentioning that the plugin API itself has a good deal of rough edges.
We will need to maintain this plugin ourselves, *and* we need to maintain our thin API
layer, to make sure the plugin loads and works with recent versions of collectd.
2. the virt plugin is out of date, doesn't report some data we need: see https://github.com/collectd/collectd/issues/1945
3. the notification message(s) are tailored for human consumption, those messages are not easy
to parse for machines.
4. the threshold support in collectd seems to match values against constants; it doesn't seem possible
to match a value against another one, as we need to do for high water mark monitoring (capacity vs. allocation).
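For reference, the check we need compares two live samples rather than a sample and a constant; a minimal sketch (the function name and the 80% watermark are assumptions for illustration):

```python
# Hedged sketch of the drive high water mark check: Vdsm must compare a
# drive's current allocation against its capacity -- two live values --
# which collectd's threshold plugin apparently cannot express.
def needs_extension(allocation, capacity, watermark=0.8):
    """True when allocation crosses `watermark` fraction of capacity."""
    if capacity <= 0:
        return False
    return allocation >= watermark * capacity
```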
How I'm addressing, or how I plan to address those challenges (aka action items):
1. I've been experimenting with out-of-tree plugins, and I managed to develop, build, install and run
one out-of-tree plugin: https://github.com/mojaves/vmon/tree/master/collectd
The development pace of collectd looks sustainable, so this doesn't look such a big deal.
Furthermore, we can engage with upstream to merge our plugins, either as-is or to extend existing ones.
2. Write another collectd plugin based on the Vdsm python code and/or my past accelerator executable project
3. patch the collectd notification code. It is yet another plugin
4. send notifications from the new virt module as per #2, bypassing the threshold system. This move could prevent
the new virt module from being merged into the collectd tree.
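For action item 2, a skeleton along these lines might be a starting point. The `collectd.register_read` / `collectd.Values` calls are the collectd python-plugin API (available only inside the daemon); the sampling function and metric names are placeholder assumptions, not Vdsm code.

```python
# Hedged sketch of a collectd python plugin for per-VM stats.
def sample_vms():
    """Placeholder for real libvirt/Vdsm sampling; {vm_name: cpu_time}."""
    return {"vm01": 12.5}

def read_callback():
    # Called periodically by collectd; dispatch one gauge per VM.
    for vm, cpu in sample_vms().items():
        vl = collectd.Values(plugin="vmon", plugin_instance=vm,
                             type="gauge", type_instance="cpu_time")
        vl.dispatch(values=[cpu])

try:
    import collectd  # importable only inside the collectd daemon
    collectd.register_read(read_callback)
except ImportError:
    collectd = None  # running outside collectd (e.g. during tests)
```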
Current status of the action items:
1. done BUT PoC quality
2. To be done (more work than #1/possible dupe with github issue)
3. need more investigation, conflicts with #4
4. need more investigation, conflicts with #3
All the code I'm working on can be found at https://github.com/mojaves/vmon
Comments are appreciated
Red Hat Engineering, Virtualization R&D