On 04/04 23:23, Nadav Goldin wrote:
updates:
- lago jobs were migrated; triggering on pull requests was tested and is working,
  lago-bot commenting and check_merged still need to be tested.
- 4 new fc23 VMs were added to the new Jenkins instance (fc23-vm04-07)
- 1 new el7 VM was added (el7-vm25)

I've given admin permissions to all the infra members who have already enrolled,
in case anyone needs access.
Things I've had to change so far:
* Allow non-logged-in users to read jobs (not managed by puppet)
* Add a new credential for the lago deploy job (ssh to resources.ovirt.org as
  the lago-deploy-snapshot user, not in puppet, using a private key)
* Upgrade the ssh-agent plugin and restart jenkins, as it was not picking up
  the upgraded plugin correctly just by 'reloading' (see the snippet below):
  https://gerrit.ovirt.org/#/c/55722/
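
A quick way to confirm the restarted master really picked up the new plugin
version (a minimal sketch, assuming the python-jenkins library and an API
token; the username/token below are placeholders):

    # Sketch: print the installed ssh-agent plugin version on the new master.
    # Assumes python-jenkins is installed; credentials are placeholders.
    import jenkins

    server = jenkins.Jenkins('http://jenkins.phx.ovirt.org',
                             username='jenkins-admin', password='<api-token>')
    for plugin in server.get_plugins_info():
        if plugin.get('shortName') == 'ssh-agent':
            print(plugin['shortName'], plugin['version'],
                  'active:', plugin.get('active'))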
By the way, the jenkins host is currently failing to run puppet (it's using the
testing env), so I was unable to actually verify any patches, as I did not want
to mess up any on-going tests.
On Mon, Apr 4, 2016 at 12:16 PM, Nadav Goldin <ngoldin(a)redhat.com> wrote:
>
>
>> Just to make sure, don't confuse lago jobs with ovirt-system tests, the
>> ovirt-system tests use bare metal slaves; those can't be created from templates,
>> so those can only be installed with pxe (or physically on site, as the virtual
>> media of the ilo does not work so well)
>>
> sure, only lago* jobs, either way we can have a final check before I add
> the github credentials.
>
>> We already have a template for fc23, it's just creating a new slave from that
>> template (can be done from foreman too, faster than installing from scratch).
>
> I thought we could migrate all lago jobs with the slaves at once, but because
> of [1] NGN needs them too, so either way we need to have fc23 slaves on both
> jenkins instances until the full migration.
>
>> +1, let's try to install a few new f23 slaves from a template then.
>>
> working on it, will update.
>
>
>
> [1] https://ovirt-jira.atlassian.net/browse/OVIRT-461
>
> On Mon, Apr 4, 2016 at 12:10 PM, Eyal Edri <eedri(a)redhat.com> wrote:
>
>>
>>
>> On Mon, Apr 4, 2016 at 11:52 AM, David Caro Estevez <dcaro(a)redhat.com>
>> wrote:
>>
>>> On 04/04 11:49, Eyal Edri wrote:
>>> > On Mon, Apr 4, 2016 at 10:38 AM, David Caro Estevez <dcaro(a)redhat.com>
>>> > wrote:
>>> >
>>> > > On 04/03 20:27, Nadav Goldin wrote:
>>> > > > Hey David,
>>> > > > as part of the migration to jenkins.phx.ovirt.org, I want to advance with
>>> > > > the Lago jobs. I already migrated
>>> > > > infra-puppet/infra-docs/ovirt-node/appliance/imgbased jobs, and so far
>>> > > > they seem all to work. As far as I understand the Lago jobs are pretty
>>> > > > independent so it should be rather simple. Currently there are 3 slaves
>>> > > > configured (fc23, el7, fc21).
>>> > >
>>> > >
>>> > > There are only 3 fc23 slaves; having one less duplicates the check run
>>> > > time, and having only one triplicates it. Can you create new slaves instead
>>> > > of moving them from the old jenkins? (lago is not the only one using them,
>>> > > so migrating all of them is not an option)
>>> > >
>>> >
>>> > Is it possible to add new slaves with the current state of pxe not working?
>>> > The ideal would be to have all new servers installed once the pxe is fixed,
>>> > so we can deploy many more slaves.
>>> > This way we can just add lots of slaves to the new jenkins.
>>>
>>> We already have a template for fc23, it's just creating a new slave from that
>>> template (can be done from foreman too, faster than installing from scratch).
>>>
>>
>> +1, let's try to install a few new f23 slaves from a template then.
>>
>>
>>>
>>> >
>>> >
>>> > >
>>> > >
>>> > > >
>>> > > > At the first stage (until we finish the migration)
>>> > > > jenkins_master_deploy-configs_merged is not running, so we could control
>>> > > > which jobs get migrated. So if a patch to the jenkins yaml is introduced
>>> > > > during the migration process it will have to be re-run manually.
>>> > > >
>>> > > > After migrating I'll disable the lago jobs in jenkins.ovirt.org, so even
>>> > > > if JJB runs we will have only one jenkins running the CI checks.
>>> > >
>>> > > Don't allow both to run anything at the same time, that will lead to
>>> > > confusion and branches being deleted at strange times on the github repo;
>>> > > if they run on one jenkins master, run them there only.
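
For the disabling part, a minimal sketch with the python-jenkins library that
disables every lago* job on the old master (the URL, credentials and the simple
substring match below are assumptions/placeholders, adjust as needed):

    # Sketch: disable all lago* jobs on the old master so only one jenkins
    # runs the CI checks. Assumes python-jenkins; credentials are placeholders.
    import jenkins

    old_master = jenkins.Jenkins('http://jenkins.ovirt.org',
                                 username='jenkins-admin',
                                 password='<api-token>')
    for job in old_master.get_jobs():
        if 'lago' in job['name']:
            old_master.disable_job(job['name'])
            print('disabled', job['name'])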
>>> > >
>>> > > >
>>> > > > One more question is whether there are any other jobs which are dependent
>>> > > > on the Lago jobs (like the publishers which are dependent on all
>>> > > > build_artifacts on ovirt-node/appliance/node)
>>> > >
>>> > > Lago is self-contained, anything lago needs (check-build-deploy) is tagged
>>> > > as lago*, and any other job that uses lago gets it from the repos.
>>> > >
>>> > > >
>>> > > > As far as I understand, the only thing needed for migration is updating
>>> > > > the github api tokens and running JJB with *lago*.
>>> > >
>>> > > And disabling the jobs on the other jenkins.
>>> > > The github configuration is not trivial though, the api token is valid only
>>> > > once and for a specific url. Also you have to configure the github hooks to
>>> > > point to the new jenkins (or it will not get any events); that is done on
>>> > > the github page, under project configuration.
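
Repointing the hooks can also be scripted against the github REST API instead
of clicking through the project settings; a rough sketch (assuming the requests
library; the repo name, token and hostnames are placeholders):

    # Sketch: repoint existing webhooks from the old jenkins to the new one.
    # Repo, token and hostnames are placeholders; adjust before running.
    import requests

    token = '<github-api-token>'
    headers = {'Authorization': 'token ' + token}
    api = 'https://api.github.com/repos/<org>/<repo>/hooks'

    for hook in requests.get(api, headers=headers).json():
        url = hook['config'].get('url', '')
        if 'jenkins.ovirt.org' in url:
            new_cfg = dict(hook['config'],
                           url=url.replace('jenkins.ovirt.org',
                                           'jenkins.phx.ovirt.org'))
            requests.patch('%s/%s' % (api, hook['id']),
                           headers=headers, json={'config': new_cfg})
            print('updated hook', hook['id'])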
>>> > >
>>> > > >
>>> > > > What do you think?
>>> > > >
>>> > > >
>>> > > > Thanks
>>> > > >
>>> > > > Nadav.
>>> > >
>>> >
>>> >
>
--
David Caro
Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D
Tel.: +420 532 294 605
Email: dcaro(a)redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605