[ANNOUNCE] Introducing STDCI V2


Hey, I just got around to studying this.

- Nice clear email!
- Everything really makes sense.
- Thank you for fixing the -excludes thing in the yaml. That was rough :)
- The graph view in Blue Ocean is easy to see and understand.
- "We now support “sub stages” which provide the ability to run multiple different scripts in parallel" -- what kind of races should we watch out for? :) For example in OST, I think I'll have to adapt docker stuff to be aware that another set of containers could be running at the same time -- not positive though.

It looks like the substages replace change_resolver in OST. Can you go into that in more detail? How does this impact running mock_runner locally? When I run it locally it doesn't appear to parallelize like it does in Jenkins / Blue Ocean.

Best wishes,
Greg

On Mon, Apr 16, 2018 at 10:17 AM, Barak Korren <bkorren@redhat.com> wrote:
The CI team is thrilled to announce the general availability of the second version of the oVirt CI standard. Work on this version included an almost complete rewrite of the CI backend. The major user-visible features are:
- Project maintainers no longer need to maintain YAML in the ‘jenkins’ repository. Details that were specified there, including targeted distributions, architectures and oVirt versions, should now be specified in a YAML file in the project’s own repository (in a different syntax).
- We now support “sub stages”, which provide the ability to run multiple different scripts in parallel within the same STDCI stage. There is also a conditional syntax that allows controlling which scripts get executed according to which files were changed in the patch being tested.
- The STDCI script file names and locations can now be customized via the above-mentioned YAML file. This means that, for example, using the same script for different stages can now be done by assigning it to those stages in the YAML file instead of by using symlinks (see the sketch below).
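As a rough sketch of what assigning one script to two stages could look like (the ‘Script’ key and the ‘automation/run-checks.sh’ path are illustrative assumptions, not confirmed V2 syntax -- see the documentation linked below for the real format):

Stages:
  - check-patch:
      # same script reused for the pre-merge stage
      Script: automation/run-checks.sh
  - check-merged:
      # ...and reused for the post-merge stage, no symlink needed
      Script: automation/run-checks.sh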
Inspecting job results in STDCI V2
----------------------------------

As already mentioned, the work on STDCI V2 consisted of a major rewrite of the CI backend. One of the changes made is switching from using multiple “FreeStyle”-type jobs per project to just two pipeline jobs (pre-merge and post-merge). This has implications for the way job results are inspected.
Since all the different parallel tasks now happen within the same job, looking at the job output can be rather confusing as it includes the merged output of all the tasks. Instead, the “Blue Ocean” view should be used. It displays a graphical layout of the job execution, allowing one to quickly see which parts of the job failed, and it also allows drilling down and viewing the logs of individual parts of the job.
Apart from using the “Blue Ocean” view, job logs are also stored as artifact files. The ‘exported-artifacts’ directory seen in the job results will now include different subdirectories for the different parts of the job. Assuming we have a ‘check-patch’ stage script running on ‘el7/x86_64’, we can find its output under ‘exported-artifacts’ in:
check-patch.el7.x86_64/mock_logs/script/stdout_stderr.log
Any additional artifacts generated by the script would be present in the ‘check-patch.el7.x86_64’ directory as well.
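For illustration, assuming a project that runs ‘check-patch’ on el7 and fc27 (both on x86_64), the resulting artifact layout would look roughly like this:

exported-artifacts/
  check-patch.el7.x86_64/
    mock_logs/script/stdout_stderr.log
    ... (plus any artifacts the el7 script run produced)
  check-patch.fc27.x86_64/
    mock_logs/script/stdout_stderr.log
    ... (plus any artifacts the fc27 script run produced)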
I have a CI YAML file in my project already, is this really new?
----------------------------------------------------------------

We’ve been working on this for a while, and occasionally introduced V2 features into individual projects as needed. In particular, our GitHub support was always based on STDCI V2 code, so all GitHub projects (except Lago, which is ‘special’…) are already using STDCI V2.
A few Gerrit-based projects have already been converted to V2 as well, as part of our efforts to test and debug the V2 code. Most notably, the “OST” and “Jenkins” projects have been switched, although they are running the STDCI V1 jobs as well for the time being.
What is the process for switching my project to STDCI V2?
---------------------------------------------------------

The CI team is going to proactively work with project maintainers to switch them to V2. The process for switching is as follows:
- Send a one-line patch to the ‘jenkins’ repo to enable the V2 jobs for the project. At this point the V2 jobs will run side-by-side with the V1 jobs, and will execute the STDCI scripts on el7/x86_64.
- Create an STDCI YAML file to define the target distributions, architectures and oVirt versions for the project. (See below for a sample file that would be equivalent to what many projects have defined in V1 currently.) As soon as a patch with the new YAML file is submitted to the project, the V2 job will parse it and follow the instructions in it. This allows for easy verification of the file’s functionality in CI.
- Remove the STDCI V1 job configuration from the ‘jenkins’ repo. This should be the last patch project maintainers have to send to the ‘jenkins’ repo.
What does the new YAML file look like?
--------------------------------------

We defined multiple optional names for the file, so that each project owner can choose whichever name seems most suitable. The following names can be used:
- stdci.yaml
- automation.yaml
- ovirtci.yaml
A dot (.) can optionally be added at the beginning of the file name to make the file hidden, and the file extension can also be “yml”. If multiple matching files exist in the project repo, the first matching file according to the order listed above will be used.
The file conforms to the YAML syntax. The key names in the file are case-insensitive, and hyphens (-), underscores (_) and spaces ( ) in key names are ignored. Additionally, we support multiple forms of the same word, so you don’t need to remember whether the key should be ‘distro’, ‘distros’, ‘distributions’, ‘operating-systems’ or ‘OperatingSystems’ -- all these forms (and others) will work and mean the same thing.
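For example, taking the description above at face value, each of the following spellings should be read as the same key (only one of them would actually appear in a given file):

# equivalent spellings of the same key -- pick one
distros: [ el7 ]
Distributions: [ el7 ]
operating-systems: [ el7 ]
Operating Systems: [ el7 ]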
To create complex test/build matrices, ‘stage’, ‘distribution’, ‘architecture’ and ‘sub-stage’ definitions can be nested within one another. We find this to be more intuitive than having to maintain tedious ‘exclude’ lists as was needed in V1.
Here is an example of an STDCI V2 YAML file that is compatible with the current master branch V1 configuration of many oVirt projects:
---
Architectures:
  - x86_64:
      Distributions: [ "el7", "fc27" ]
  - ppc64le:
      Distribution: el7
  - s390x:
      Distribution: fc27
Release Branches:
  master: ovirt-master
Note: since the file is committed into the project’s own repo, having different configuration for different branches can be done by simply having different files in the different branches, so there is no need for a big convoluted file to configure all branches.
Since the above file does not mention stages, any STDCI scripts that exist in the project repo and belong to a particular stage will be run on all specified distribution and architecture combinations. Since it is sometimes desired to run ‘check-patch.sh’ on fewer platforms than ‘build-artifacts’, for example, a slightly different file would be needed:
---
Architectures:
  - x86_64:
      Distributions: [ "el7", "fc27" ]
  - ppc64le:
      Distribution: el7
  - s390x:
      Distribution: fc27
Stages:
  - check-patch:
      Architecture: x86_64
      Distribution: el7
  - build-artifacts
Release Branches:
  master: ovirt-master
The above file makes ‘check-patch’ run only on el7/x86_64, while ‘build-artifacts’ runs on all the specified platforms, and ‘check-merged’ would not run at all because it is not listed in the file.
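Should a project want ‘check-merged’ as well, the natural extension would presumably be to list it under ‘Stages’ too, following the same pattern (a sketch, not taken from a real project):

Stages:
  - check-patch:
      Architecture: x86_64
      Distribution: el7
  - check-merged:
      # hypothetical addition: run merge-time checks on one platform only
      Architecture: x86_64
      Distribution: el7
  - build-artifacts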
Great efforts have been made to make the file format flexible yet intuitive to use. Additionally, there are many defaults in place that allow specifying complex behaviours with very brief YAML code. For further details about the file format, please see the documentation linked below.
About the relation between STDCI V2 and the change-queue
---------------------------------------------------------

In STDCI V1, the change queue that would run the OST tests and release a given patch was determined by looking at the “version” part of the name of the project’s build-artifacts jobs that got invoked for the patch.
This was confusing, as most people understood “version” to mean the internal version of their own project rather than the oVirt version.
In V2 we decided to be more explicit and simply include a map from branches to change queues in the YAML configuration under the “release-branches” option, as can be seen in the examples above.
We also chose to no longer allow specifying the oVirt version as a shorthand for the equivalent queue name (e.g. specifying ‘4.2’ instead of ‘ovirt-4.2’). This should reduce the chance of confusion between project versions and queue names, and also allows us to create and use change queues for projects that are not part of oVirt.
A project can choose not to include a “release-branches” option, in which case its patches will not get submitted to any queues.
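As an illustration, a project whose master branch feeds the master queue and which also maintains a hypothetical stable branch named ‘ovirt-4.2’ could map both as follows (queue names spelled out in the full ‘ovirt-<version>’ form described above):

Release Branches:
  master: ovirt-master
  ovirt-4.2: ovirt-4.2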
Further information
-------------------

The documentation for STDCI can be found at [1].
The documentation updates for V2 are still in progress and are expected to be merged soon. In the meantime, the GitHub-specific documentation [2] already provides a great deal of information which is relevant for V2.
[1]: http://ovirt-infra-docs.readthedocs.io/en/latest/CI/Build_and_test_standards
[2]: http://ovirt-infra-docs.readthedocs.io/en/latest/CI/Using_STDCI_with_GitHub
--
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA
gshereme@redhat.com    IRC: gshereme

Hey,
I just got around to studying this.
- Nice clear email!
- Everything really makes sense.
- Thank you for fixing the -excludes thing in the yaml. That was rough :)
- The graph view in Blue Ocean is easy to see and understand.
- "We now support “sub stages” which provide the ability to run multiple different scripts in parallel" -- what kind of races should we watch out for? :) For example in OST, I think I'll have to adapt docker stuff to be aware that another set of containers could be running at the same time -- not positive though.
You shouldn't expect any races due to that change. Sub-stages are there to allow triggering more than one task/job from a single CI event, such as check-patch when a patch is created/updated, or check-merged/build-artifacts when a patch is merged. Sub-stages run in parallel, but on different slaves. With sub-stages you can, for example, run different scripts in parallel and on different slaves to do different tasks, such as running unit tests in parallel with docs generation and build verification.
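As a very rough sketch of how that might be expressed in the YAML file (the ‘Sub Stages’, ‘Script’, ‘Run If’ and ‘File Changed’ key names and the script paths are guesses for the sake of illustration, not confirmed syntax -- check the STDCI documentation for the real format):

Stages:
  - check-patch:
      Sub Stages:
        - unit-tests:
            # runs on its own slave
            Script: automation/unit-tests.sh
        - docs:
            # runs in parallel on another slave
            Script: automation/build-docs.sh
            Run If:
              File Changed: 'docs/*'

Since each sub-stage gets its own slave, the parallel scripts do not share a workspace, which is why no new races are expected.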
It looks like the substages replace change_resolver in OST. Can you go into that in more detail? How does this impact running mock_runner locally? When I run it locally it doesn't appear to parallelize like it does in Jenkins / Blue Ocean.
That is true. In STDCI V1 we used to run change_resolver in check-patch to check the commit and resolve the relevant changes. STDCI V2 has this feature integrated into one of its core components, called usrc.py. We haven't decided yet how/if we will integrate this tool into OST, or how we will achieve the same behaviour when running OST locally with mock_runner. For now, you can keep using the "old" check-patch.sh with mock_runner, which will call change_resolver. I'd recommend sending a patch and letting Jenkins do the checks for you; it will be faster in many of the cases where you'd have to run several suites in parallel. We'll send a proper announcement regarding the new (STDCI V2 based) jobs for OST, including instructions for debugging and how this change affects you as an OST developer.

Thanks,

--
DANIEL BELENKY
RHV DEVOPS

How do I map branches to distros, like, how do I add fc28 only for master?

In other words, how do I replicate something like this v1 config?

version:
  - master:
      branch: master
  - 4.2:
      branch: master
  - 4.1:
      branch: master
distro:
  - el7
  - fc27
exclude:
  - { version: 4.1, distro: fc27 }
arch: x86_64

Greg

Hi Greg,

Since STDCI reacts to changes in your repo (a PR or merge in GitHub, or a new patchset/update/submit in Gerrit), STDCI checks out the branch the change was made to after cloning your project. Since stdci.yaml is located in your repository, you need to write a different stdci.yaml for each branch, with that branch's specific configuration.

--
DANIEL BELENKY
RHV DEVOPS
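As a sketch of how that could answer the fc28 question above (assuming fc28 is available in CI; the distro list is purely illustrative), the master branch's stdci.yaml might look like:

---
Architectures:
  - x86_64:
      Distributions: [ "el7", "fc27", "fc28" ]
Release Branches:
  master: ovirt-master

while an older branch would carry its own stdci.yaml listing only the distributions it needs (e.g. just el7), which takes the place of the v1 ‘exclude’ list shown above.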