Enhancing std-ci for deployment (std-cd)

Hi all,

I'm contemplating the best way to enable including deployment logic in standard-CI scripts.

Case in point: embedding the deployment logic of our infra-puppet repo. One thing to note here is that deployment in this scenario can happen either post-merge (like it does today) or pre-merge (create a per-patch puppet environment to enable easy testing).

I can think of a few ways to go about this (rough sketches of each option follow below):

1. Copy the full generated puppet configuration into 'exported-artifacts' and add logic to the YAML to copy it to the foreman server.
   The main shortcoming of this is that we would have to maintain quite a bit of custom logic in the YAML, which defeats the purpose of embedding the logic in the source repo in the first place.

2. Mount the '/etc/puppet' directory into the chroot.
   This would require the foreman host to be a Jenkins slave, plus some custom YAML to ensure the jobs run on it (not a big deal IMO). The shortcoming is that running the tests locally with mock_runner would be cumbersome (it would touch your local /etc/puppet directory and probably fail). Another issue is that we would have to find a way to figure out the Gerrit patch information from inside mock; possibly we could use the commit message or the git hash for that.

3. Invent some kind of new deploy_*.sh script.
   This makes it possible to run the checking code locally without the deployment code. The YAML changes for this could be quite generic and shared with other projects. We could possibly also invent a 'deploy_*.target' file to specify where to run the deploy script (e.g. a Jenkins label). We could even consider not running the script inside mock, though I think mock's benefits outweigh the limits it imposes on accessing the outside system (which can mostly be bypassed anyway with bind mounts).

So, WDYT?

--
Barak Korren
bkorren@redhat.com
RHEV-CI Team
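A rough sketch of the kind of custom copy step option 1 would push into the YAML; the host name, user and target path below are purely illustrative:

    # Ship the generated configuration from exported-artifacts to the foreman
    # server. foreman.example.org, the 'deploy' user and the target path are
    # made-up placeholders.
    rsync -av --delete \
        exported-artifacts/puppet/ \
        deploy@foreman.example.org:/etc/puppet/environments/production/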
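For option 2, a minimal sketch of how the Gerrit patch information could be recovered from inside mock, assuming the checkout still carries the Change-Id footer that Gerrit's commit-msg hook adds:

    # Extract the Gerrit Change-Id and the commit hash from the checked-out
    # source; either can then be used to look up the patch in Gerrit.
    change_id=$(git log -1 --pretty=%B | sed -n 's/^Change-Id: *//p')
    commit_hash=$(git rev-parse HEAD)
    echo "change: ${change_id}  commit: ${commit_hash}"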
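For option 3, a hypothetical layout mirroring the existing check_*.sh convention; all file names and the Jenkins label are invented for illustration:

    automation/deploy_puppet.target   # a single Jenkins label, e.g. "foreman-slave"
    automation/deploy_puppet.sh       # the deployment logic itself, for example:

    #!/bin/bash -ex
    # deploy_puppet.sh - sketch only: copy the configuration generated by the
    # check stage into the live puppet environment on this (foreman) slave.
    rsync -av --delete exported-artifacts/puppet/ /etc/puppet/environments/production/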

----- Original Message -----
From: "Barak Korren" <bkorren@redhat.com> To: "infra" <infra@ovirt.org> Sent: Tuesday, June 7, 2016 10:45:06 AM Subject: Enhancing std-ci for deployment (std-cd)
Hi all,
I'm contemplating the best way to enable including deployment logic in standard-CI scripts.
I'm working on a first POC of something similar to that right now: deploying engine rpms to an 'experimental' repo on build-artifacts success.
Case in point: embedding the deployment logic of our infra-puppet repo. One thing to note here is that deployment in this scenario can happen either post-merge (like it does today) or pre-merge (create a per-patch puppet environment to enable easy testing).
I can think of a few ways to go about this:
1. Copy the full generated puppet configuration into 'exported-artifacts' and add logic to the YAML to copy it to the foreman server.
The main shortcoming of this is that we would have to maintain quite a bit of custom logic in the YAML, which defeats the purpose of embedding the logic in the source repo in the first place.
2. Mount the '/etc/puppet' directory into the chroot.
This would require the foreman host to be a Jenkins slave, plus some custom YAML to ensure the jobs run on it (not a big deal IMO).
The shortcoming is that running the tests locally with mock_runner would be cumbersome (it would touch your local /etc/puppet directory and probably fail). Another issue is that we would have to find a way to figure out the Gerrit patch information from inside mock; possibly we could use the commit message or the git hash for that.
3. Invent some kind of a new deploy_*.sh script
This makes it possible to run the checking code locally without the deployment code. The YAML changes for this could be quite generic and shared with other projects. We could possibly also invent a 'deploy_*.target' file to specify where to run the deploy script (e.g. a Jenkins label).
We could even consider not running the script inside mock, though I think mock's benefits outweigh the limits it imposes on accessing the outside system (which can be mostly bypassed anyway with bind mounts).
So, WDYT?
I'd go a 4th way:
* For the non-merged patches, use lago or similar instead of deploying into the prod foreman. It might be a bit cumbersome to generate the env, but for most cases it's way more flexible and a lot less risky.
* For the merged patches, I'd use a 'passive' deployment, where the scripts with the deploy logic reside on foreman and are activated by Jenkins (for example, by ssh-ing to the slave, similar to how we deploy there today). That puts the deploy logic on the server where it should be deployed, most probably using the same or a very similar script on the non-merged checks to deploy to the virtual environment. This leaves a clean YAML, keeps security strict (only a specific ssh user with the correct private key can do it, and it can only run that script and nothing else), and keeps the infra config details out of the source code.
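A rough sketch of the 'keyed ssh command' setup described above, assuming a hypothetical deploy user and script on the foreman host; the Jenkins key is pinned to a single forced command in authorized_keys, so it can run the deploy script and nothing else:

    # On the foreman host (user name, script path and key are placeholders):
    echo 'command="/usr/local/sbin/puppet-deploy.sh",no-port-forwarding,no-pty,no-agent-forwarding ssh-ed25519 AAAA... jenkins@ovirt-ci' \
        >> /home/deploy/.ssh/authorized_keys

    # The Jenkins side then needs nothing more than:
    ssh deploy@foreman.example.org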

I'd go a 4th way:
* For the non-merged patches, use lago or similar instead of deploying into the prod foreman. It might be a bit cumbersome to generate the env, but for most cases it's way more flexible and a lot less risky.
This would probably mean all tests would need to be automated; while that is a worthy goal, it is not practical in the short term IMO.
* For the merged patches, I'd use a 'passive' deployment, where the scripts with the deploy logic reside on foreman and are activated by Jenkins (for example, by ssh-ing to the slave, similar to how we deploy there today). That puts the deploy logic on the server where it should be deployed, most probably using the same or a very similar script on the non-merged checks to deploy to the virtual environment. This leaves a clean YAML, keeps security strict (only a specific ssh user with the correct private key can do it, and it can only run that script and nothing else), and keeps the infra config details out of the source code.
While I agree that infra details should be kept outside the source repo, this seems to create a situation where all the deployment logic will also permanently reside outside of it. I want the deployment logic to be self-contained and movable. I'm actually looking at this right now because I want to deploy the Puppet code on the DS Sat6 and not the US foreman.

I can see the security benefits of the keyed ssh commands, but I'm not sure those are required in all cases, or that they outweigh the lack of transparency in the logic and the probable need for manual maintenance.

--
Barak Korren
bkorren@redhat.com
RHEV-CI Team

----- Original Message -----
From: "Barak Korren" <bkorren@redhat.com> To: "David Caro Estevez" <dcaroest@redhat.com> Cc: "infra" <infra@ovirt.org> Sent: Tuesday, June 7, 2016 11:31:17 AM Subject: Re: Enhancing std-ci for deployment (std-cd)
I'd go a 4th way:
* For the non-merged patches, use lago or similar instead of deploying into the prod foreman. It might be a bit cumbersome to generate the env, but for most cases it's way more flexible and a lot less risky.
This would probably mean all tests would need to be automated; while that is a worthy goal, it is not practical in the short term IMO.
I don't think we should invest time in automating any other solution; that would mean not just not working on this one, but actually burying it under the extra effort of adapting whatever 'temporary short term' solution was used instead.
* For the merged patches, I'd use a 'passive' deployment, where the scripts with the deploy logic reside on foreman and are activated by Jenkins (for example, by ssh-ing to the slave, similar to how we deploy there today). That puts the deploy logic on the server where it should be deployed, most probably using the same or a very similar script on the non-merged checks to deploy to the virtual environment. This leaves a clean YAML, keeps security strict (only a specific ssh user with the correct private key can do it, and it can only run that script and nothing else), and keeps the infra config details out of the source code.
While I agree that infra details should be kept outside the source repo, this seems to create a situation where all the deployment logic will also permanently reside outside of it. I want the deployment logic to be self-contained and movable. I'm actually looking at this right now because I want to deploy the Puppet code on the DS Sat6 and not the US foreman.
I don't think you should use the same deploy procedure on the upstream foreman and the DS satellite; each env has its own particularities, and unless you want to deploy the whole env (like deploying full VMs or containers), it's not worth it IMO to try to keep such a generic deploy script, taking into account all the limitations and maintenance that genericness requires.
I can see the security benefits of the keyed ssh commands, but I'm not sure those are required in all cases, or that they outweigh the lack of transparency in the logic and the probable need for manual maintenance.
I don't think the manual maintenance will be that high; the deploy scripts can easily be puppetized themselves. And IMO, upstream, the ssh commands are more than required; they should be a bare minimum.
participants (2)
- Barak Korren
- David Caro Estevez