
On 03/02/2015 14:04, David Caro wrote:
On 02/03, Sandro Bonazzola wrote:
On 20/01/2015 16:50, David Caro wrote:
Hi everyone!
After talking a bit with some of you, I think that we can start planning a common way to declare builds and test dependencies for the ovirt products, to improve and automate most of the ci process and maintenance.
Current status:
== Dependencies
Right now we have 4 types of dependencies:
* test dependencies
* tarball/srcrpm build dependencies
* rpm build dependencies
* installation dependencies
The last two are managed from the spec files through the rpm/yum dependency systems, but the first two are managed manually in the jenkins jobs or puppet manifests. That separates them from the code that actually requires them and adds an extra layer of maintenance and synchronization between the code, the jenkins jobs and the puppet repository.
== Builds
We started using autotools to build most of the projects, but it's not a global methodology: even on the projects that use it, you need to tune the run for each of them, specifying different variables and running some side scripts.
== Tests
Some projects use make check to run some of the tests, while other tests live entirely outside the code and run only in jenkins jobs.
Some possible improvements:
== Tests/builds
Using shell scripts: We talked before in another thread about creating generic scripts to build the artifacts for each project and to run the tests. Namely, we talked about having 3 executables (bash scripts probably) at the root of each project, none of which should require any parameters:
./build-artifacts This should generate any artifacts to be archived (isos, rpms, debs, tarballs, ...) and leave them in the ./exported-artifacts/ directory for the build system to collect, removing any previous artifacts if needed.
./check_patch Runs all the tests required for any new patchset in gerrit (non-merged changes). It should be fast to run, so feedback comes quickly.
./check_merge Runs all the tests for any change that is going to be merged (right now we are not using gates, so it actually runs after merge, but the idea is to use this as a gate for any merge to the repo). This can be more resource hungry than check_patch.
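To make it concrete, a build-artifacts script for an autotools-based project could be a short bash script along these lines (just a sketch; 'myproject' and the autotools steps are placeholder assumptions, each project would adapt it to its own tooling):

  #!/bin/bash -xe
  # minimal sketch, assuming an autotools project named 'myproject'
  # clean up anything left over from a previous run
  rm -rf exported-artifacts
  mkdir exported-artifacts
  # generate the tarball
  ./autogen.sh
  ./configure
  make dist
  # build source and binary rpms straight from the tarball,
  # dropping them into ./exported-artifacts/ for jenkins to collect
  rpmbuild -ta \
      --define "_rpmdir $PWD/exported-artifacts" \
      --define "_srcrpmdir $PWD/exported-artifacts" \
      myproject-*.tar.gz
  mv myproject-*.tar.gz exported-artifacts/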
That way it lets you use any framework you want for your project, while keeping it easy to maintain in the global ci/build system (you can use pip, tox, maven, gradle, autotools, make, rake, ...). At first this will not allow running tests in parallel in jenkins, but we can add that possibility in the future (for example, by allowing the user to define more than one check script, like check_patch.mytest1 and check_patch.mytest2, and making jenkins run them in parallel). I started a POC of this process here [1]
Using travis yaml files: Using a travis compliant yaml file [2]. That will be less flexible than the above solution and will not allow running tests in parallel, though it would let us use travis at any point to offload our ci if needed.
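For reference, a plain travis yaml file for a python project looks roughly like this (the values are only illustrative):

  language: python
  python:
      - "2.7"
  install:
      - pip install tox
  script:
      - tox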
== Dependencies
Using plain text files: Similar to the scripts solution above, I thought of adding an extra file, with the same name, to declare the dependencies needed to run each script, adding a suffix in case of different requirements for different distros (matching facter fact strings), for example:
./build-artifacts.req Main requirements file, used if no more specific one is found. A newline-separated list of packages to install on the environment (jenkins will take care of which package manager to use).
./build-artifacts.req.fc20 Specific requirements for the fc20 environment; replaces the general one if present.
And the same for the other scripts (check_patch and check_merge).
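For example (the package names below are only illustrative):

  $ cat build-artifacts.req
  autoconf
  automake
  rpm-build

  $ cat build-artifacts.req.fc20
  autoconf
  automake
  rpm-build
  python-pep8

Note that since the distro-specific file replaces the generic one, it repeats the common packages.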
Using a travis yaml file: Using a travis compliant yaml file with some extensions to declare the different dependencies for each script type and os/distro. That would allow you to have only one extra file in your repo, though you'd have to duplicate some of the requirements, as travis only supports ubuntu and forces you to run scripts to install dependencies.
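A sketch of what such an extended file could look like; note that the per-distro requirements section is a made-up extension of ours, not something travis itself understands:

  language: python
  install:
      - pip install tox
  script:
      - tox
  # made-up extension, ignored by travis itself:
  requirements:
      check_patch:
          default:
              - python-pep8
          fc20:
              - python-pep8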
What do you think? Do you have any better idea?
I would prefer to have the above scripts in something like a jenkins, automation or build sub-directory. Other than that, no better idea.
As long as all the projects use the same path and name for them, I have no issue with putting them somewhere else.
I don't think jenkins is a good name, but automation is a nice one.
build is too common, used by a lot of different tools, better avoid it.
So can we agree to use automation directory?
automation is ok for me.
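So each repo would end up with a layout along these lines (which files exist depends on the project):

  automation/
      build-artifacts
      build-artifacts.req
      build-artifacts.req.fc20
      check_patch
      check_patch.req
      check_merge
      check_merge.req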
ps. About using an external repository to store the scripts/requirements for the code: the issue with this is that it forces you to bind a code change in the code repo to a script/dependency change in the scripts repo, and that adds a lot of extra maintenance and sources of issues and failures. If you know a way of doing it like that without all the fuss, I'd love to hear it.
For example, imagine that vdsm's dependencies live in another repo, and you send a patch to vdsm that requires a specific pep8 version to pass the patch tests. You have to change the scripts repo to add that dependency, but doing that you will break all the other patches' tests, because they require the older pep8 version. So you'd have to somehow specify in the vdsm patch that it must be tested with a specific commit of the scripts repo...
Having both in the same repo allows you to do the code change and the dependency/script change in the same patchset, and test it right away with the correct scripts/deps.
It also binds code and tests together to some extent, which is nice to have from a product point of view, because you know for each version which tests it passed, and you have a better idea of the possible failures for that version.
[1] http://gerrit.ovirt.org/#/admin/projects/repoman
[2] http://docs.travis-ci.com/
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com