
Hey David, as part of the migration to jenkins.phx.ovirt.org, I want to move ahead with the Lago jobs. I have already migrated the infra-puppet/infra-docs/ovirt-node/appliance/imgbased jobs, and so far they all seem to work. As far as I understand, the Lago jobs are fairly independent, so it should be rather simple. Currently there are 3 slaves configured (fc23, el7, fc21).

At the first stage (until we finish the migration), jenkins_master_deploy-configs_merged is not running, so we can control which jobs get migrated. That means that if a patch to the Jenkins YAML is introduced during the migration, JJB will have to be re-run manually.

After migrating, I'll disable the Lago jobs on jenkins.ovirt.org, so even if JJB runs we will have only one Jenkins running the CI checks.

One more question: are there any other jobs that depend on the Lago jobs (like the publishers, which depend on all the build_artifacts jobs on ovirt-node/appliance/node)?

As far as I understand, the only things needed for the migration are updating the GitHub API tokens and running JJB with *lago*.

What do you think?

Thanks,
Nadav
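For that last step, the JJB invocation might look like the sketch below. The `jenkins.ini` config name and `jobs/` directory are assumptions about the local checkout, but `jenkins-jobs` does select jobs by shell-style glob, which is what running it "with *lago*" relies on:

```shell
#!/bin/sh
# Hypothetical JJB run: only job names matching *lago* are rendered/updated.
# "jenkins.ini" and "jobs/" are placeholder paths, not the real repo layout.
#
#   jenkins-jobs --conf jenkins.ini test jobs/ "*lago*"    # dry run: render XML locally
#   jenkins-jobs --conf jenkins.ini update jobs/ "*lago*"  # push to the new master
#
# The glob is matched against job names the same way the shell would match it:
matches_lago() {
  case "$1" in
    *lago*) return 0 ;;
    *)      return 1 ;;
  esac
}
```

So a job name like lago_master_check-patch-fc23 would be picked up, while the infra-docs or ovirt-node jobs are left untouched.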

On Sun, Apr 3, 2016 at 7:27 PM, Nadav Goldin <ngoldin@redhat.com> wrote:
[...]
Can you migrate user accounts as well? I can't log in to jenkins.phx.ovirt.org.
_______________________________________________
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

Hi Sandro, currently you need to sign up and I'll add you the permissions (you can self-enrol on the main page).

On Mon, Apr 4, 2016 at 9:23 AM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
[...]

Sandro, we are working on making the signup similar to the 'ssh users' flow, so you'll just need to send your public key. Nadav will update when it's ready, but for now you can sign up as Nadav said.

On Mon, Apr 4, 2016 at 9:35 AM, Nadav Goldin <ngoldin@redhat.com> wrote:
[...]
--
Eyal Edri
Associate Manager, RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

On Mon, Apr 4, 2016 at 8:35 AM, Nadav Goldin <ngoldin@redhat.com> wrote:
Hi Sandro, currently you need to sign up and I'll add you the permissions(you can self enrol in the main page)
User created, but I get:
Access Denied: sbonazzo is missing the Overall/Read permission

Done, try now.

On Mon, Apr 4, 2016 at 9:57 AM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
[...]

On Mon, Apr 4, 2016 at 9:01 AM, Nadav Goldin <ngoldin@redhat.com> wrote:
done, try now.
looks ok now.

On 04/03 20:27, Nadav Goldin wrote:
Currently there are 3 slaves configured (fc23, el7, fc21).
There are only 3 fc23 slaves; losing one doubles the check run time, and having only one triples it. Can you create new slaves instead of moving them from the old Jenkins? (Lago is not the only one using them, so migrating all of them is not an option.)
After migrating I'll disable the lago jobs in jenkins.ovirt.org, so even if JJB runs we will have only one jenkins running the CI checks.
Don't allow both to run anything at the same time; that will lead to confusion and to branches being deleted at strange times on the GitHub repo. If the jobs run on one Jenkins master, run them there only.
One more question is if there are any other jobs which are dependent on the Lago jobs (like the publishers which are dependent on all build_artifacts on ovirt-node/appliance/node)
Lago is self-contained: anything Lago needs (check-build-deploy) is tagged as lago*, and any other job that uses Lago gets it from the repos.
As far as I understand the only thing needed for migration is updating the github api tokens and running JJB with *lago*.
And disabling the jobs on the other Jenkins. The GitHub configuration is not trivial though: the API token is valid only once and only for a specific URL. You also have to configure the GitHub hooks to point to the new Jenkins (or it will not get any events); that is done on the GitHub page, under the project configuration.
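As a sketch of that hook change: the org/repo, the token, and even whether you use the API rather than the project-configuration page are assumptions here; `/github-webhook/` is the endpoint the Jenkins GitHub plugin listens on:

```shell
#!/bin/sh
# Hypothetical: re-point a repo's webhook at the new Jenkins master via the
# GitHub "create a repository webhook" API instead of the web UI.
# ORG/REPO and GITHUB_TOKEN are placeholders, not real values.
JENKINS_HOOK_URL="https://jenkins.phx.ovirt.org/github-webhook/"
payload=$(cat <<EOF
{
  "name": "web",
  "active": true,
  "events": ["push", "pull_request"],
  "config": {"url": "${JENKINS_HOOK_URL}", "content_type": "json"}
}
EOF
)
# The actual request is left commented out:
# curl -u "user:${GITHUB_TOKEN}" -X POST -d "$payload" \
#      "https://api.github.com/repos/ORG/REPO/hooks"
echo "$payload"
```

Deleting or disabling the old hook on jenkins.ovirt.org at the same time avoids the double-triggering David warns about above.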
--
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dcaro@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605

On Mon, Apr 4, 2016 at 10:38 AM, David Caro Estevez <dcaro@redhat.com> wrote:
Can you create new slaves instead of moving them from the old jenkins? (lago is not the only one using them, so migrating all of them is not an option)
Is it possible to add new slaves in the current state, with PXE not working? The ideal would be to have all the new servers installed once PXE is fixed, so we can deploy many more slaves. That way we can just add lots of slaves to the new Jenkins.

On 04/04 11:49, Eyal Edri wrote:
Is it possible to add new slaves with the current state of pxe not working?
We already have a template for fc23; it's just a matter of creating a new slave from that template (it can be done from Foreman too, faster than installing from scratch).
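If that fc23 template is a libvirt VM, cloning it could look like the sketch below. The template and slave names are placeholders, and using `virt-clone` rather than Foreman is an assumption about the setup:

```shell
#!/bin/sh
# Hypothetical clone of an fc23 template into a new numbered Jenkins slave.
# Names are placeholders; Foreman can do the same thing, as mentioned above.
next_slave_name() {
  # e.g. next_slave_name 4 produces a zero-padded name like fc23-slave-04
  printf 'fc23-slave-%02d' "$1"
}
# virt-clone --original fc23-template \
#            --name "$(next_slave_name 4)" --auto-clone
```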

On 04/04 10:52, David Caro Estevez wrote:
We already have a template for fc23, it's just creating a new slave from that template.
Just to make sure: don't confuse the Lago jobs with the ovirt-system tests. The ovirt-system tests use bare-metal slaves, which can't be created from templates, so they can only be installed with PXE (or physically on site, as the virtual media of the iLO does not work so well).

On Mon, Apr 4, 2016 at 11:52 AM, David Caro Estevez <dcaro@redhat.com> wrote:
We already have a template for fc23, it's just creating a new slave from that template (can be done from foreman too, faster than installing from scratch).
+1, let's try to install a few new fc23 slaves from the template then.

> Just to make sure, don't confuse lago jobs with ovirt-system tests: the ovirt-system tests use bare metal slaves, those can't be created from templates, so they can only be installed with pxe (or physically on site, as the virtual media of the ilo does not work so well).

Sure, only lago* jobs; either way we can have a final check before I add the github credentials.

> We already have a template for fc23, it's just creating a new slave from that template (can be done from foreman too, faster than installing from scratch).

> +1, let's try to install a few new f23 slaves from template then.

I thought we could migrate all lago jobs with the slaves at once, but because of [1] NGN needs them too, so either way we need to have fc23 slaves on both jenkins masters until the full migration. Working on it, will update.

[1] https://ovirt-jira.atlassian.net/browse/OVIRT-461

On Mon, Apr 4, 2016 at 12:10 PM, Eyal Edri <eedri@redhat.com> wrote:
On Mon, Apr 4, 2016 at 11:52 AM, David Caro Estevez <dcaro@redhat.com> wrote:

On 04/03 20:27, Nadav Goldin wrote:
> Currently there are 3 slaves configured (fc23, el7, fc21).

There are only 3 fc23 slaves: having one less doubles the check run time, and having only one triples it. Can you create new slaves instead of moving them from the old jenkins? (lago is not the only one using them, so migrating all of them is not an option)

On 04/04 11:49, Eyal Edri wrote:
> Is it possible to add new slaves with the current state of pxe not working? The ideal would be to have all the new servers installed once pxe is fixed, so we can deploy many more slaves; this way we can just add lots of slaves to the new jenkins.

We already have a template for fc23; it's just creating a new slave from that template (can be done from foreman too, faster than installing from scratch).

+1, let's try to install a few new f23 slaves from template then.

Updates:
- lago jobs were migrated; triggering on pull requests was tested and working, lago-bot commenting and check_merged still need to be tested.
- 4 new fc23 VMs were added to the new Jenkins instance (fc23-vm04-07)
- 1 new el7 VM was added (el7-vm25)

I've given admin permissions to all infra members who have already enrolled, in case anyone needs access.

On Mon, Apr 4, 2016 at 12:16 PM, Nadav Goldin <ngoldin@redhat.com> wrote:

On 04/04 23:23, Nadav Goldin wrote:
> updates: lago jobs were migrated, 4 new fc23 VMs and 1 new el7 VM were added.
Things that I had to change so far:
* Allow non-logged-in users to read jobs (not managed by puppet)
* Add a new credential for the lago deploy job (ssh to resources.ovirt.org as the lago-deploy-snapshot user, not in puppet, using a private key)
* Upgrade the ssh-agent plugin and restart jenkins, as it was not pulling in the upgraded plugin correctly just by 'reloading': https://gerrit.ovirt.org/#/c/55722/

btw. the jenkins host is currently failing to run puppet (it's using the testing env), so I was unable to actually verify any patches, as I did not want to mess up any on-going tests.
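For context on the second item: the lago deploy job consumes that credential through JJB's ssh-agent-credentials wrapper, which in the yaml looks roughly like this (the job name and credential id here are hypothetical placeholders for whatever ids were actually configured in Jenkins):

```yaml
- job:
    name: lago_deploy-snapshot_merged
    wrappers:
      - ssh-agent-credentials:
          users:
            - 'lago-deploy-snapshot-key'  # hypothetical credential id
```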

On Wed, Apr 6, 2016 at 1:57 PM, David Caro Estevez <dcaro@redhat.com> wrote:
> Things that I had to change so far:
> * Allow non-logged in users to read jobs (not managed by puppet)
> * Add a new credential for the lago deploy job (ssh to resources.ovirt.org as lago-deploy-snapshot user, not in puppet, using private key)
> * Upgrade the ssh-agent plugin and restart jenkins, as it was not pulling correctly the upgraded plugin just by 'reloading' https://gerrit.ovirt.org/#/c/55722/

Commented on the patch; did the ssh-agent upgrade solve it?

> btw. the jenkins host is currently failing to run puppet (it's using testing env), so I was unable to actually verify any patches, as I did not want to mess up any on-going tests

Sure, I'll cherry-pick and test.

On 04/06 14:23, Nadav Goldin wrote:
> commented on the patch, did the ssh-agent upgrade solve it?

Yep, with the above changes too; still testing a full run (so I haven't tested branch deletion and automated issue/pull request close).

On 04/06 13:25, David Caro Estevez wrote:
> Yep, with the above changes too, still testing a full run (so haven't tested branch deletion and automated issue/pull request close)

Verified, works perfectly :)
participants (4)
- David Caro Estevez
- Eyal Edri
- Nadav Goldin
- Sandro Bonazzola