[ovirt-devel] Using travis yaml files to specify dependencies and tests
David Caro
dcaroest at redhat.com
Tue Jan 20 15:50:03 UTC 2015
Hi everyone!
After talking a bit with some of you, I think we can start planning a
common way to declare builds, dependencies and tests for the ovirt
products, to improve and automate most of the ci process and its
maintenance.
Current status:
== Dependencies
Right now we have 4 types of dependencies:
* test dependencies
* tarball/srcrpm build dependencies
* rpm build dependencies
* installation dependencies
The last two are managed from the spec files through the rpm/yum
dependency system, but the first two are managed manually in the
jenkins jobs or in the puppet manifests. That separates them from the
code that actually requires them and adds an extra layer of
maintenance and synchronization between the code, the jenkins jobs
and the puppet repository.
== Builds
We started using autotools to build most of the projects, but it's
not used everywhere, and even on the projects that do use it you need
to tune the run for each of them, specifying different variables and
running side scripts.
== Tests
Some projects use make check to run some of their tests; other tests
live completely outside the code and run only in jenkins jobs.
Some possible improvements:
== Tests/builds
Using shell scripts:
We talked before in another thread about creating generic scripts to
build the artifacts for each product and to run the tests, namely
having three executables (probably bash scripts) at the root of each
project, none of which should require any parameters (a minimal
sketch follows the list):
./build-artifacts
    This should generate any artifacts to be archived (isos, rpms,
    debs, tarballs, ...) and leave them in the ./exported-artifacts/
    directory for the build system to collect, removing any previous
    artifacts if needed.
./check_patch
    Runs all the tests required for any new patchset in gerrit (i.e.
    non-merged changes); it should be fast to run so we get feedback
    quickly.
./check_merge
    Runs all the tests for any change that is going to be merged
    (right now we are not using gates, so it actually runs after the
    merge, but the idea is to use this as a gate for any merge to the
    repo). This can be more resource hungry than check_patch.
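To make that concrete, here is a minimal sketch of what two of these
scripts could look like for a hypothetical autotools/python project;
the tool and path names are only illustrative assumptions, each
project would fill in whatever its own framework needs:

    #!/bin/bash -ex
    # ./build-artifacts (sketch): build everything the CI should archive
    rm -rf exported-artifacts            # drop any previous artifacts
    mkdir exported-artifacts
    ./autogen.sh && ./configure
    make dist                            # build the tarball
    rpmbuild -ta --define "_rpmdir $PWD/exported-artifacts" ./*.tar.gz
    mv ./*.tar.gz exported-artifacts/

    #!/bin/bash -ex
    # ./check_patch (sketch): fast checks for every new gerrit patchset
    pep8 lib/ tests/                     # cheap static checks fail fast
    nosetests -v tests/                  # quick unit test run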
That way you can easily use whatever framework you want for your
project (pip, tox, maven, gradle, autotools, make, rake, ...) while
keeping it easy to maintain in the global ci/build system. At first
this will not allow running tests in parallel in jenkins, but we can
add that possibility later (for example, by letting the user define
more than one check script, like check_patch.mytest1 and
check_patch.mytest2, and making jenkins run them in parallel).
I started a POC of this process here [1]
Using travis yaml files:
Using a travis-compliant yaml file [2]. That would be less flexible
than the above solution and would not allow running tests in
parallel, though it would let us use travis at any point to offload
our ci if needed.
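For reference, a minimal travis-compliant file for a python project
could look something like this (the python/tox choice is just an
illustrative assumption, not a proposal for any particular project):

    language: python
    python:
      - "2.7"
    install:
      - pip install tox
    script:
      - tox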
== Dependencies
Using plain text files:
Similar to the above scripts solution, I thought of adding an extra
file per script, with the same base name, to declare the dependencies
needed to run that script, adding a suffix when different distros
have different requirements (matching facter fact strings). For
example (a small consumption sketch follows the list):
./build-artifacts.req
    Main requirements file, used if no more specific one is found. It
    contains a newline-separated list of packages to install in the
    environment (jenkins will take care of which package manager to
    use).
./build-artifacts.req.fc20
    Specific requirements for the fc20 environment; replaces the
    general one if present.
And the same for the other scripts (check_patch and check_merge).
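As an illustration of how the build system could consume these files,
assuming a check_patch.req next to the script (the package names and
the hardcoded fc20 fallback here are only a sketch, the real suffix
resolution would follow the facter facts mentioned above):

    # pick the most specific requirements file present for this environment
    req_file=check_patch.req
    [ -f check_patch.req.fc20 ] && req_file=check_patch.req.fc20
    # install the newline-separated package list with the distro's
    # package manager
    xargs yum install -y < "$req_file"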
Using travis yaml file:
Using a travis-compliant yaml file with some extensions to declare
the different dependencies for each script type and os/distro. That
would allow you to have only one extra file in your repo, though
you'd have to duplicate some of the requirements, as travis only
provides ubuntu and forces you to run scripts to install
dependencies.
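A rough sketch of what such an extended file could look like; the
*_req section below is a hypothetical extension that only our own
tooling would parse, not something upstream travis understands:

    language: python
    install:
      - pip install tox    # what travis itself would run, on its ubuntu workers
    script:
      - tox
    # hypothetical extension, consumed only by our own ci:
    check_patch_req:
      default: [python-pep8, python-nose]
      fc20:    [python-pep8, python-nose, python-mock]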
What do you think? Do you have any better ideas?
ps. About using an external repository to store the
scripts/requirements for the code: the issue with this is that it
forces you to bind a code change in the code repo to a
script/dependency change in the scripts repo, and that adds a lot of
extra maintenance and is a source of issues and failures. If you know
a way of doing it like that without all the fuss, I'd love to hear it.
For example, imagine that vdsm's dependencies live in another repo.
Now you send a patch to vdsm that requires a specific pep8 version to
pass the patch tests, so you have to change the scripts repo to add
that dependency; but doing that you break the tests of all the other
patches, because they require the older pep8 version, so you have to
somehow specify in the vdsm patch that it must be tested with a
specific commit of the scripts repo... Having both in the same repo
allows you to do the code change and the dependency/script change in
the same patchset, and to test it right away with the correct
scripts/deps.
It also binds code and tests together to some extent, which is nice
to have from a product point of view, because you know which tests
each version passed and you have a better idea of the possible
failures for that version.
[1] http://gerrit.ovirt.org/#/admin/projects/repoman
[2] http://docs.travis-ci.com/
--
David Caro
Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D
Tel.: +420 532 294 605
Email: dcaro at redhat.com
Web: www.redhat.com
RHT Global #: 82-62605