I just got around to studying this.
- Nice clear email!
- Everything really makes sense.
- Thank you for fixing the -excludes thing in the yaml. That was rough :)
- The graph view in Blue Ocean is easy to see and understand.
- "We now support “sub stages” which provide the ability to run
scripts in parallel" -- what kind of races should we watch out for? :) For
example in OST, I think I'll have to adapt the docker stuff to be aware that
another set of containers could be running at the same time -- not positive.
It looks like the sub-stages replace change_resolver in OST. Can you go into
that in more detail? How does this impact running mock_runner locally?
When I run it locally it doesn't appear to parallelize like it does in
Jenkins / Blue Ocean.
On Mon, Apr 16, 2018 at 10:17 AM, Barak Korren <bkorren(a)redhat.com> wrote:
The CI team is thrilled to announce the general availability of the new
version (V2) of the oVirt CI standard. Work on this version included an
almost complete rewrite of the CI backend. The major user-visible features are:
- Project maintainers no longer need to maintain YAML in the ‘jenkins’
repository. Details that were specified there, including targeted
distributions, architectures and oVirt versions, should now be specified in a
YAML file under the project’s own repository (in a different syntax).
- We now support “sub stages” which provide the ability to run multiple
different scripts in parallel within the same STDCI stage. There is also a
conditional syntax which allows controlling which scripts get executed
according to which files were changed in the patch being tested.
- The STDCI script file names and locations can now be customized via the
above-mentioned YAML file. This means that, e.g., using the same script for
different stages can now be done by assigning it to the stages in the YAML
file instead of by using symlinks.
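As a rough illustration of the “sub stages” and conditional features described
above, a fragment like the following could define two parallel sub-stages, one
of them gated on which files changed. The ‘substages’, ‘runif’ and
‘filechanged’ key names here are my assumptions for illustration only and are
not taken from this mail:

```yaml
# Illustrative sketch only -- the exact key names for sub-stages and
# change-based conditions are assumptions, not confirmed by this mail.
stages:
  - check-patch:
      substages:
        - lint:                     # runs in parallel with unit-tests
            runif:
              filechanged: '*.py'   # run only when Python files changed
        - unit-tests
```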
Inspecting job results in STDCI V2
As already mentioned, the work on STDCI V2 consisted of a major rewrite of
the CI backend. One of the changes made is switching from using multiple
different jobs per project to just two (pre-merge and post-merge) pipeline
jobs. This has implications for the way job results are to be inspected.
Since all the different parallel tasks now happen within the same job,
looking at the job output can be rather confusing, as it includes the merged
output of all the tasks. Instead, the “Blue Ocean” view should be used. The
“Blue Ocean” view displays a graphical layout of the job execution, allowing
one to learn which parts of the job failed. It also allows drilling down and
viewing the logs of individual parts of the job.
Apart from using the “Blue Ocean” view, job logs are also stored as files.
The ‘exported-artifacts’ directory seen in the job results will now include
different subdirectories for the different parts of the job. If we have a
‘check-patch’ stage script running on ‘el7/x86_64’, for example, we can find
its output under ‘exported-artifacts’ in the ‘check-patch.el7.x86-64’
subdirectory. Any additional artifacts generated by the script would be
present in that directory as well.
I have a CI YAML file in my project already, is this really new?
We’ve been working on this for a while, and occasionally introduced V2
features into individual projects as needed. In particular, our GitHub
support was based on STDCI V2 code, so all GitHub projects (except Lago,
which is ‘special’…) are already using STDCI V2.
A few Gerrit-based projects have already been converted to V2 as well, as
part of our efforts to test and debug the V2 code. Most notably, the “OST”
project, among others, has been switched, although the STDCI V1 jobs are
still being run for the time being.
What is the process for switching my project to STDCI V2?
The CI team is going to proactively work with project maintainers to switch
to V2. The process for switching is as follows:
- Send a one-line patch to the ‘jenkins’ repo to enable the V2 jobs for the
project. At this point the V2 jobs will run side-by-side with the V1 jobs,
and will execute the STDCI scripts on el7/x86_64.
- Create an STDCI YAML file to define the target distributions,
architectures and oVirt versions for the project (see below for a sample
file that is equivalent to what many projects have defined in V1 currently).
As soon as a patch with the new YAML file is submitted to the project, the V2
job will parse it and follow the instructions in it. This allows for an easy
verification of the file's functionality in CI.
- Remove the STDCI V1 job configuration from the ‘jenkins’ repo. This should
be the last patch project maintainers have to send to the ‘jenkins’ repo.
What does the new YAML file look like?
We defined multiple optional names for the file, so that each project can
choose whichever name seems most adequate. The following names can be used:
A dot (.) can also optionally be added at the beginning of the file name to
make the file hidden; the file extension could also be “yml”. If multiple
such files exist in the project repo, the first matching file, according to
the order listed above, will be used.
The file conforms to the YAML syntax. The key names in the file are
case-agnostic, and hyphens (-), underscores (_) and spaces ( ) in key names
are ignored. Additionally, we support multiple forms of the same word, so you
don't need to remember if the key should be ‘distro’, ‘distros’,
‘distributions’, ‘operating-systems’ or ‘OperatingSystems’, as all these
forms (and others) will work and mean the same thing.
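For example, under the key-matching rules just described, the following
spellings would all select the same option (a hypothetical fragment; a real
file would use just one of them):

```yaml
# All equivalent under the case/hyphen/space-agnostic key matching
# described above (illustrative -- use only one in a real file):
distros: [ "el7" ]
# Distributions: [ "el7" ]
# Operating-Systems: [ "el7" ]
# OperatingSystems: [ "el7" ]
```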
To create complex test/build matrices, ‘stage’, ‘distribution’,
‘architecture’ and ‘sub-stage’ definitions can be nested within one another.
We find this to be more intuitive than having to maintain tedious ‘exclude’
lists as was needed in V1.
Here is an example of an STDCI V2 YAML file that is compatible with the
master branch V1 configuration of many oVirt projects:
Distributions: [ "el7", "fc27" ]
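For illustration, a slightly fuller file along these lines might look as
follows. Only the ‘Distributions’ line is taken from the sample above; the
‘Architectures’ and ‘Release Branches’ keys are assumptions extrapolated from
the features described elsewhere in this mail:

```yaml
# Illustrative sketch; key names other than 'Distributions' are
# assumptions based on the features described in this mail.
Distributions: [ "el7", "fc27" ]
Architectures: [ "x86_64" ]
Release Branches:
  master: ovirt-master
```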
Note: since the file is committed into the project’s own repo, having
different configurations for different branches can be done by simply having
different versions of the file in the different branches, so there is no need
for a big convoluted file to configure all branches.
Since the above file does not mention stages, any STDCI scripts that exist in
the project repo and belong to a particular stage will be run on all the
distribution and architecture combinations. Since it is sometimes desired to
run ‘check-patch.sh’ on fewer platforms than ‘build-artifacts’, for example,
a different file would be needed:
Distributions: [ “el7”, “fc27” ]
The above file makes ‘check-patch’ run only on el7/x86_64, while
‘build-artifacts’ runs on all the platforms specified, and ‘check-merged’
would not run at all, as it is not listed in the file.
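A sketch of what such a file could look like, using the nesting described
earlier. The exact nesting and the ‘Stages’ key name are assumptions for
illustration:

```yaml
# Illustrative: check-patch restricted to el7/x86_64, while
# build-artifacts keeps the full matrix; check-merged is omitted
# and therefore would not run.
Distributions: [ "el7", "fc27" ]
Stages:
  - check-patch:
      Distributions: [ "el7" ]
      Architectures: [ "x86_64" ]
  - build-artifacts
```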
Great efforts have been made to make the file format very flexible but easy
to use. Additionally, there are many defaults in place to allow specifying
complex behaviours with very brief YAML code. For further details about the
format, please see the documentation linked below.
About the relation between STDCI V2 and the change-queue
In STDCI V1, the change queue that would run the OST tests and release a
patch was determined by looking at the “version” part of the name of the
project’s ‘build-artifacts’ jobs that got invoked for the patch.
This was confusing, as most people understood “version” to mean the
internal version of their own project rather than the oVirt version.
In V2 we decided to be more explicit and simply include a map from branches
to change queues in the YAML configuration, under the “release-branches”
option, as can be seen in the examples above.
We also chose to no longer allow specifying the oVirt version as a shorthand
for the equivalent queue name (e.g. specifying ‘4.2’ instead of ‘ovirt-4.2’).
This should reduce the chance of confusion between project versions and queue
names, and also allows us to create and use change queues for projects that
are not part of oVirt.
A project can choose not to include a “release-branches” option, in which
case its patches will not get submitted to any queues.
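As an illustration of such a mapping, the branch names below are
hypothetical, and the queue names follow the ‘ovirt-4.2’ style mentioned
above:

```yaml
# Illustrative 'release-branches' map from project branches to
# oVirt change queues; the branch names are hypothetical.
release-branches:
  master: ovirt-master
  mybranch-4.2: ovirt-4.2
```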
The documentation for STDCI can be found at .
The documentation updates for V2 are still in progress and expected to be
published soon. In the meantime, the GitHub-specific documentation already
contains a great deal of information which is relevant for V2.
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
| TRIED. TESTED. TRUSTED. | redhat.com/trusted
Devel mailing list
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA