[ovirt-devel] Creating a new gerrit flag

David Caro dcaroest at redhat.com
Thu Dec 11 10:06:24 UTC 2014


On 12/11, Nir Soffer wrote:
> ----- Original Message -----
> > From: "David Caro" <dcaroest at redhat.com>
> > To: "Nir Soffer" <nsoffer at redhat.com>
> > Cc: "Eyal Edri" <eedri at redhat.com>, "Oved Ourfali" <ovedo at redhat.com>, "infra" <infra at ovirt.org>, devel at ovirt.org
> > Sent: Wednesday, December 10, 2014 4:59:36 PM
> > Subject: Re: [ovirt-devel] Creating a new gerrit flag
> > 
> > On 12/10, Nir Soffer wrote:
> > > 
> > > 
> > > ----- Original Message -----
> > > > From: "Eyal Edri" <eedri at redhat.com>
> > > > To: devel at ovirt.org
> > > > Cc: "Oved Ourfali" <ovedo at redhat.com>, "infra" <infra at ovirt.org>
> > > > Sent: Wednesday, December 10, 2014 10:40:47 AM
> > > > Subject: Re: [ovirt-devel] Creating a new gerrit flag
> > > > 
> > > > 
> > > > 
> > > > ----- Original Message -----
> > > > > From: "Oved Ourfali" <ovedo at redhat.com>
> > > > > To: "David Caro" <dcaroest at redhat.com>
> > > > > Cc: devel at ovirt.org
> > > > > Sent: Wednesday, December 10, 2014 8:30:30 AM
> > > > > Subject: Re: [ovirt-devel] Creating a new gerrit flag
> > > > > 
> > > > > 
> > > > > 
> > > > > ----- Original Message -----
> > > > > > From: "David Caro" <dcaroest at redhat.com>
> > > > > > To: "Oved Ourfali" <ovedo at redhat.com>
> > > > > > Cc: devel at ovirt.org
> > > > > > Sent: Tuesday, December 9, 2014 7:02:44 PM
> > > > > > Subject: Re: [ovirt-devel] Creating a new gerrit flag
> > > > > > 
> > > > > > On 12/09, Oved Ourfali wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > ----- Original Message -----
> > > > > > > > From: "David Caro" <dcaroest at redhat.com>
> > > > > > > > To: "Oved Ourfali" <ovedo at redhat.com>
> > > > > > > > Cc: "Sven Kieske" <s.kieske at mittwald.de>, devel at ovirt.org
> > > > > > > > Sent: Tuesday, December 9, 2014 3:40:30 PM
> > > > > > > > Subject: Re: [ovirt-devel] Creating a new gerrit flag
> > > > > > > > 
> > > > > > > > On 12/09, Oved Ourfali wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > ----- Original Message -----
> > > > > > > > > > From: "Sven Kieske" <s.kieske at mittwald.de>
> > > > > > > > > > To: devel at ovirt.org
> > > > > > > > > > Sent: Tuesday, December 9, 2014 3:21:43 PM
> > > > > > > > > > Subject: Re: [ovirt-devel] Creating a new gerrit flag
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > On 09/12/14 13:47, Oved Ourfali wrote:
> > > > > > > > > > > safe up to 95% or so.
> > > > > > > > > > 
> > > > > > > > > > You just made up that number.
> > > > > > > > > > I don't really understand why you would want
> > > > > > > > > > to downgrade your code quality by circumventing tests.
> > > > > > > > > > 
> > > > > > > > > > Maybe someone can elaborate on this a bit?
> > > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > It doesn't downgrade the code quality.
> > > > > > > > > It is just a way to ensure developers can both merge
> > > > > > > > > changes and do it as safely as possible without relying
> > > > > > > > > on post-submit tools.
> > > > > > > > > The number is indeed invented, as I don't have real
> > > > > > > > > statistics, but it's meant to say that it would be safe
> > > > > > > > > most of the time.
> > > > > > > > > After the patch is merged, if CI fails, it is the
> > > > > > > > > responsibility of the developer to look into it and fix
> > > > > > > > > it.
> > > > > > > > 
> > > > > > > > This thread was started to avoid getting to that point:
> > > > > > > > merging a failing patch means breaking all the other tests
> > > > > > > > that run on top of it, which blocks all development, not
> > > > > > > > only that specific patch.
> > > > > > > > 
> > > > > > > 
> > > > > > > The issue that started the discussion was one in which there
> > > > > > > was a Tests "-1" flag, and it was ignored.
> > > > > > > My suggestion will enforce that it won't be ignored.
> > > > > > > In the rarer cases in which the rebase is the source of the
> > > > > > > test failures, you'll find out about it later.
> > > > > > 
> > > > > > I started the discussion, and I started it because a developer
> > > > > > complained about not being able to merge a patch: it was
> > > > > > failing the tests due to an already merged patch that was
> > > > > > making all the builds fail. I was trying to find a solution to
> > > > > > avoid getting to the point where a patch is merged while
> > > > > > breaking the tests.
> > > > > > 
> > > > > > 
> > > > > > So in summary, you are suggesting this:
> > > > > > 
> > > > > > Creating a new flag 'tested', with values +1, 0 and -1, that
> > > > > > only jenkins and managers can set
> > > > > > 
> > > > > > Block from submitting any patches that have a -1
> > > > > > 
> > > > > > Carry the value of that flag over to following patchsets only
> > > > > > if the flag was -1
> > > > > > 
> > > > > 
> > > > 
> > > > +1, we need a way to block bad patches from being merged, even
> > > > with a rebase in gerrit.
> > > > Going forward we're planning a few changes to the way jenkins jobs
> > > > are run on the ovirt CI, which will help reduce noise and improve
> > > > resource usage.
> > > > 
> > > > 1. Moving to a flow process, where critical jobs like unit
> > > > tests/checkstyle will run first, and only then will the other
> > > > heavy jobs run (integration/rpms/findbugs)
> > > 
> > > This has already been implemented in vdsm for a few months -
> > > running "make check" runs the fast tests first and skips the slower
> > > tests if a fast test failed.
> > 
> > Please change it to be able to run only the fast tests or only the
> > slow tests; that way we can split the job in two and give feedback
> > about the fast tests before the slow ones have finished running.
> 
> These are the available targets (from faster to slower):
> 
> - gitignore - check that certain files are ignored
> 
> - pyflakes - check common Python errors (e.g. unused imports)
> 
> - pep8 - style check
> 
> - check - run the fast checks above and if successful, the unittests
> 
>   Environment variables:
> 
>   NOSE_SLOW_TESTS=1 - enable slow tests (we have only a few)
>   NOSE_STRESS_TESTS=1 - enable stress tests (probably not useful for the CI)
> 
>   Note that the environment variables are used only for the tests in
>   vdsm/tests; there are a few tests in various subdirectories that do
>   not use the test infrastructure in vdsm/tests.
> 
> - check-all - run make check enabling both slow and stress tests
> 
> Do you need a separate target for the unittests?

I just want to be able to execute:

> make fast-check

for fast checks (you decide which checks are fast and which are not)

> make slow-check

for slow checks, non-overlapping, meaning that if you want to run all
the checks, you'll have to run fast and then slow.
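
Something like this would do, as a purely illustrative Makefile sketch
(the "unittests" target is an assumption here; today vdsm folds the
unit tests into "check", and a real split would also need a way to run
only the slow tests):

    # Sketch only: reuse the existing fast targets, split out the tests.
    fast-check: gitignore pyflakes pep8
            $(MAKE) unittests                  # fast unit tests only

    slow-check:
            # NOSE_SLOW_TESTS=1 enables the slow tests on top of the
            # fast ones, so a nose filter would still be needed to
            # skip the latter and keep the two targets non-overlapping.
            NOSE_SLOW_TESTS=1 $(MAKE) unittests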

The idea is to generalize the interface we use for the tests across
all the oVirt projects, so from CI we don't have to keep specific
scripts for each project and for each project version, and to be able
to run the fast checks (on each patchset, merges included) and the
slow ones (on each merge only) separately, so we can give feedback
faster and avoid starting the slow ones when they are not needed.
After that comes building the rpm, then the functional tests, and
last the release, if relevant.

On patch:
   fast_check

On merge:
   fast_check -> slow_check -> build -> functional_check -> release

That flow is generic for all the projects, so each project will have
to implement the same interface to run each step (I don't really care
if it's make fast-check or just a bash script that runs whatever you
need underneath; the key is not having to specify any options, if
possible, and running just one line, the same everywhere).
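
As a sketch, the interface each project would expose could look like
this (the automation/ scripts are invented names here; each project
would hide whatever it needs behind them):

    .PHONY: fast-check slow-check build functional-check release

    fast-check:
            ./automation/fast_check.sh
    slow-check:
            ./automation/slow_check.sh
    build:
            ./automation/build.sh
    functional-check:
            ./automation/functional_check.sh
    release:
            ./automation/release.sh

With that in place, jenkins only ever runs "make fast-check" on a
patchset and the full chain on a merge, no matter the project or the
version.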

That simplifies the maintenance of the jenkins jobs a LOT, and allows
the way the checks are run to change along with the version of the
product.

The next step is to specify the required dependencies inside the
project, so the test dependencies are also stored in the repo, bound
to the code, and installed in the test env on demand; no more issues
with versioning or new dependencies. Right now that is hardcoded in
the job itself, which makes it break if any future version needs a
different dependency list. I'm not sure yet what the best way to do
that is though, maybe a simple puppet manifest would be the way to
go... because having a requirements.txt file is Python specific and
not all the projects we have are Python or use distutils at all.
Ideas are welcome.
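
Just to show the shape of it, one language-agnostic option (the file
name, target name and package manager are all invented here, and the
puppet manifest idea would simply replace the recipe body):

    # automation/test-deps.txt holds one distro package name per line
    test-deps:
            xargs -a automation/test-deps.txt yum install -y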


> 
> Nir

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dcaro at redhat.com
Web: www.redhat.com
RHT Global #: 82-62605