Hey,
I just got around to studying this.
- Nice clear email!
- Everything really makes sense.
- Thank you for fixing the -excludes thing in the yaml. That was rough :)
- The graph view in Blue Ocean is easy to see and understand.
- "We now support “sub stages” which provide the ability to run
multiple different
scripts in parallel" -- what kind of races should we watch out for? :) For
example in OST, I think I'll have to adapt docker stuff to be aware that
another set of containers could be running at the same time -- not positive
though.
It looks like the substages replace change_resolver in OST. Can you go into
that in more detail? How does this impact running mock_runner locally? When
I run it locally it doesn't appear to parallelize like it does in Jenkins /
Blue Ocean.
Best wishes,
Greg
On Mon, Apr 16, 2018 at 10:17 AM, Barak Korren <bkorren(a)redhat.com> wrote:
The CI team is thrilled to announce the general availability of the second
version of the oVirt CI standard. Work on this version included almost a
complete rewrite of the CI backend. The major user-visible features are:
- Project maintainers no longer need to maintain YAML in the ‘jenkins’
repository. Details that were specified there, including targeted
distributions, architectures and oVirt versions, should now be specified in
a YAML file under the project’s own repository (in a different syntax).
- We now support “sub stages”, which provide the ability to run multiple
different scripts in parallel within the same STDCI stage. There is also a
conditional syntax which allows controlling which scripts get executed
according to which files were changed in the patch being tested (see the
sketch after this list).
- The STDCI script file names and locations can now be customized via the
above-mentioned YAML file. This means that, for example, using the same
script for different stages can now be done by assigning it to those stages
in the YAML file instead of by using symlinks (also shown in the sketch
below).
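To make the sub-stage and custom-script features above a bit more concrete,
here is a minimal sketch of what such a configuration could look like. The
exact key spellings used here (‘Script’, ‘Sub Stages’, ‘Run If’,
‘File Changed’) and the script path are illustrative assumptions, so please
check the documentation linked at the bottom of this mail for the
authoritative syntax:

---
Stages:
  - check-patch:
      # assumed way of pointing the stage at a custom script location
      # instead of the usual automation/check-patch.sh
      Script: automation/run-checks.sh
      Sub Stages:
        - default
        - integration-tests:
            # assumed conditional: only run this sub-stage when the
            # patch touches files under tests/
            Run If:
              File Changed: 'tests/*'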
Inspecting job results in STDCI V2
----------------------------------
As already mentioned, the work on STDCI V2 included a major rewrite of the
CI backend. One of the changes made was switching from multiple “FreeStyle”
type jobs per project to just two pipeline jobs (pre-merge and post-merge).
This has implications for the way job results are to be inspected.
Since all the different parallel tasks now happen within the same job,
looking at the job output can be rather confusing, as it includes the merged
output of all the tasks. Instead, the “Blue Ocean” view should be used. The
“Blue Ocean” view displays a graphical layout of the job execution, allowing
one to quickly learn which parts of the job failed. It also allows drilling
down and viewing the logs of individual parts of the job.
Apart from using the “Blue Ocean” view, job logs are also stored as artifact
files. The ‘exported-artifacts’ directory seen in the job results will now
include different subdirectories for the different parts of the job.
Assuming we have a ‘check-patch’ stage script running on ‘el7/x86_64’, we
can find its output under ‘exported-artifacts’ in:
check-patch.el7.x86_64/mock_logs/script/stdout_stderr.log
Any additional artifacts generated by the script would be present in the
‘check-patch.el7.x86_64’ directory as well.
I have a CI YAML file in my project already, is this really new?
----------------------------------------------------------------
We’ve been working on this for a while, and occasionally introduced V2
features into individual projects as needed. In particular, our GitHub
support was always based on STDCI V2 code, so all GitHub projects (except
Lago, which is ‘special’…) are already using STDCI V2.
A few Gerrit-based projects have already been converted to V2 as well, as
part of our efforts to test and debug the V2 code. Most notably, the “OST”
and “Jenkins” projects have been switched, although they are still running
the STDCI V1 jobs as well for the time being.
What is the process for switching my project to STDCI V2?
---------------------------------------------------------
The CI team is going to proactively work with project maintainers to switch
them to V2. The process for switching is as follows:
- Send a one-line patch to the ‘jenkins’ repo to enable the V2 jobs for the
project. At this point the V2 jobs will run side-by-side with the V1 jobs,
and will execute the STDCI scripts on el7/x86_64.
- Create an STDCI YAML file to define the target distributions,
architectures and oVirt versions for the project (see below for a sample
file that would be equivalent to what many projects have defined in V1
currently). As soon as a patch with the new YAML file is submitted to the
project, the V2 job will parse it and follow the instructions in it. This
allows for easy verification of the file’s functionality in CI.
- Remove the STDCI V1 job configuration from the ‘jenkins’ repo. This should
be the last patch project maintainers have to send to the ‘jenkins’ repo.
What does the new YAML file look like?
--------------------------------------
We defined multiple optional names for the file, so that each project owner
can choose which name seems most adequate. The following names can be used:
- stdci.yaml
- automation.yaml
- ovirtci.yaml
A dot (.) can optionally be added at the beginning of the file name to make
the file hidden, and the file extension can also be “yml”. If multiple
matching files exist in the project repo, the first matching file according
to the order listed above will be used.
The file conforms to the YAML syntax. The key names in the file are
case-insensitive, and hyphens (-), underscores (_) and spaces ( ) in key
names are ignored. Additionally, we support multiple forms of the same word,
so you don’t need to remember whether the key should be ‘distro’, ‘distros’,
‘distributions’, ‘operating-systems’ or ‘OperatingSystems’: all these forms
(and others) will work and mean the same thing.
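For example (purely as an illustration of this aliasing, not an excerpt from
the documentation), each of the following lines would be understood as the
same ‘distributions’ option:

Distributions: [ "el7", "fc27" ]
distros: [ "el7", "fc27" ]
Operating Systems: [ "el7", "fc27" ]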
To create complex test/build matrices, ‘stage’, ‘distribution’,
‘architecture’ and ‘sub-stage’ definitions can be nested within one another.
We find this to be more intuitive than having to maintain tedious ‘exclude’
lists as was needed in V1.
Here is an example of an STDCI V2 YAML file that is compatible with the
current master branch V1 configuration of many oVirt projects:
---
Architectures:
  - x86_64:
      Distributions: [ "el7", "fc27" ]
  - ppc64le:
      Distribution: el7
  - s390x:
      Distribution: fc27
Release Branches:
  master: ovirt-master
Note: since the file is committed into the project’s own repo, having a
different configuration for different branches can be done by simply having
different files in the different branches, so there is no need for a big
convoluted file to configure all branches.
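As a small sketch of this (the stable-branch and queue names below are just
assumptions for illustration), the file on the master branch could contain:

Release Branches:
  master: ovirt-master

while the same file on an ‘ovirt-4.2’ stable branch could instead contain:

Release Branches:
  ovirt-4.2: ovirt-4.2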
Since the first example file above does not mention stages, any STDCI
scripts that exist in the project repo and belong to a particular stage will
be run on all the specified distribution and architecture combinations.
Since it is sometimes desirable to run ‘check-patch.sh’ on fewer platforms
than ‘build-artifacts’, for example, a slightly different file would be
needed:
---
Architectures:
  - x86_64:
      Distributions: [ "el7", "fc27" ]
  - ppc64le:
      Distribution: el7
  - s390x:
      Distribution: fc27
Stages:
  - check-patch:
      Architecture: x86_64
      Distribution: el7
  - build-artifacts
Release Branches:
  master: ovirt-master
The above file makes ‘check-patch’ run only on el7/x86_64, while
‘build-artifacts’ runs on all the platforms specified, and ‘check-merged’
would not run at all because it is not listed in the file.
Great efforts have been made to make the file format very flexible but
intuitive to use. Additionally, there are many defaults in place to allow
specifying complex behaviours with very brief YAML code. For further details
about the file format, please see the documentation linked below.
About the relation between STDCI V2 and the change-queue
--------------------------------------------------------
In STDCI V1, the change queue that would run the OST tests and release a
given patch was determined by looking at the “version” part of the name of
the project’s build-artifacts jobs that got invoked for the patch. This was
confusing, as most people understood “version” to mean the internal version
of their own project rather than the oVirt version.
In V2 we decided to be more explicit and simply include a map from branches
to change queues in the YAML configuration, under the “release-branches”
option, as can be seen in the examples above.
We also chose to no longer allow specifying the oVirt version as a shorthand
for the equivalent queue name (e.g. specifying ‘4.2’ instead of
‘ovirt-4.2’). This should reduce the chance of confusion between project
versions and queue names, and it also allows us to create and use change
queues for projects that are not oVirt.
A project can choose not to include a “release-branches” option, in which
case its patches will not get submitted to any queues.
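For instance (the stable-branch name below is illustrative), a project
feeding both the master and 4.2 queues would spell out the full queue names
explicitly:

Release Branches:
  master: ovirt-master
  ovirt-4.2: ovirt-4.2    # the full queue name, not just ‘4.2’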
Further information
-------------------
The documentation for STDCI can be found at [1].
The documentation updates for V2 are still in progress and are expected to
be merged soon. In the meantime, the GitHub-specific documentation [2]
already provides a great deal of information which is relevant for V2.
[1]: http://ovirt-infra-docs.readthedocs.io/en/latest/CI/Build_and_test_standards
[2]: http://ovirt-infra-docs.readthedocs.io/en/latest/CI/Using_STDCI_with_GitHub
---
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA