Just to make sure, don't confuse lago jobs with ovirt-system tests. The
ovirt-system tests use bare metal slaves; those can't be created from
templates, so they can only be installed with PXE (or physically on site,
as the virtual media of the iLO does not work so well).
Sure, only lago* jobs. Either way we can have a final check before I add
the github credentials.
> We already have a template for fc23, it's just creating a new slave from
> that template (can be done from foreman too, faster than installing from
> scratch).
I thought we could migrate all lago jobs with the slaves at once, but
because of [1] NGN needs them too, so either way we need to have fc23
slaves on both Jenkins masters until the full migration.
> +1, let's try to install a few new f23 slaves from template then.
Working on it, will update.
[1]
https://ovirt-jira.atlassian.net/browse/OVIRT-461
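For the JJB step of the migration mentioned below, here is a minimal sketch
of the invocation. The jobs directory and the `jenkins.ini` config file name
are assumptions for illustration, not taken from this thread; only the
`*lago*` glob comes from the discussion.

```python
# Hedged sketch: build the jenkins-jobs command line for migrating only the
# lago* jobs. "test" renders the job XML locally without touching Jenkins;
# "update" uploads the rendered jobs to the master named in the config file.
# The jobs directory and ini file name are illustrative assumptions.
import shlex


def jjb_command(action, jobs_dir="jobs/", glob="*lago*", conf="jenkins.ini"):
    """Return the jenkins-jobs argv for a dry-run (test) or real (update) run."""
    if action not in ("test", "update"):
        raise ValueError("action must be 'test' or 'update'")
    return ["jenkins-jobs", "--conf", conf, action, jobs_dir, glob]


if __name__ == "__main__":
    # Dry-run first, then push; subprocess.run(jjb_command(...)) would
    # execute these for real against the configured master.
    for action in ("test", "update"):
        print(shlex.join(jjb_command(action)))
```

`jenkins-jobs update` accepts job-name globs as positional arguments after
the path, which is what makes migrating only the lago* subset possible.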
On Mon, Apr 4, 2016 at 12:10 PM, Eyal Edri <eedri(a)redhat.com> wrote:
> On Mon, Apr 4, 2016 at 11:52 AM, David Caro Estevez <dcaro(a)redhat.com>
> wrote:
>> On 04/04 11:49, Eyal Edri wrote:
>> > On Mon, Apr 4, 2016 at 10:38 AM, David Caro Estevez <dcaro(a)redhat.com
>> > wrote:
>>
>> > > On 04/03 20:27, Nadav Goldin wrote:
>> > > > Hey David,
>> > > > as part of the migration to jenkins.phx.ovirt.org, I want to
>> > > > advance with the Lago jobs. I already migrated
>> > > > infra-puppet/infra-docs/ovirt-node/appliance/imgbased jobs, and so
>> > > > far they all seem to work. As far as I understand, the Lago jobs
>> > > > are pretty independent, so it should be rather simple. Currently
>> > > > there are 3 slaves configured (fc23, el7, fc21).
>> >
>> >
>> > > There are only 3 fc23 slaves; having one less doubles the check
>> > > run time, and having only one triples it. Can you create new slaves
>> > > instead of moving them from the old jenkins? (lago is not the only
>> > > one using them, so migrating all of them is not an option)
>> >
>>
>> > Is it possible to add new slaves with the current state of PXE not
>> > working?
>> > Ideally, all new servers would be installed once PXE is fixed, so we
>> > can deploy many more slaves.
>> > This way we can just add lots of slaves to the new jenkins.
>
>> We already have a template for fc23; it's just creating a new slave
>> from that template (can be done from foreman too, faster than
>> installing from scratch).
>
> +1, let's try to install a few new f23 slaves from template then.
>
>>
>> > > > At the first stage (until we finish the migration)
>> > > > jenkins_master_deploy-configs_merged is not running, so we can
>> > > > control which jobs get migrated. So if a patch to the jenkins yaml
>> > > > is introduced during the migration process it will have to be
>> > > > re-run manually.
>> > >
>> > > > After migrating I'll disable the lago jobs in jenkins.ovirt.org,
>> > > > so even if JJB runs we will have only one jenkins running the CI
>> > > > checks.
>> >
>> > > Don't allow both to run anything at the same time; that will lead
>> > > to confusion and branches being deleted at strange times on the
>> > > github repo. If they run on one jenkins master, run them there only.
>> >
>> > >
>> > > > One more question: are there any other jobs which depend on the
>> > > > Lago jobs (like the publishers, which depend on all
>> > > > build_artifacts on ovirt-node/appliance/node)?
>> >
>> > > Lago is self-contained; anything lago needs (check-build-deploy) is
>> > > tagged as lago*, and any other job that uses lago gets it from the
>> > > repos.
>> >
>> > >
>> > > > As far as I understand, the only thing needed for migration is
>> > > > updating the github api tokens and running JJB with *lago*.
>> >
>> > > And disabling the jobs on the other jenkins.
>> > > The github configuration is not trivial though: the api token is
>> > > valid only once and for a specific url. Also you have to configure
>> > > the github hooks to point to the new jenkins (or it will not get any
>> > > events); that is done on the github page, under project
>> > > configuration.
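Re-pointing the hooks can also be done through the GitHub hooks API rather
than the project settings page. A minimal sketch; the repo name, hook id,
Jenkins URL, and token below are placeholder assumptions, and only the
endpoint shape (`PATCH /repos/{owner}/{repo}/hooks/{hook_id}`) is real
GitHub API.

```python
# Hedged sketch (placeholder names): after migrating a lago* job, the GitHub
# webhook must be re-pointed at the new Jenkins, or it will not receive any
# push/PR events. This builds (but does not send) the API request to do so.
import json
import urllib.request


def build_hook_update_request(owner, repo, hook_id, new_jenkins_url, token):
    """Build a PATCH request that re-points an existing GitHub webhook."""
    api = f"https://api.github.com/repos/{owner}/{repo}/hooks/{hook_id}"
    payload = {"config": {"url": new_jenkins_url, "content_type": "json"}}
    return urllib.request.Request(
        api,
        data=json.dumps(payload).encode(),
        method="PATCH",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github.v3+json",
        },
    )


if __name__ == "__main__":
    # Placeholder values; a real run needs a valid token and hook id, and
    # urllib.request.urlopen(req) to actually send the request.
    req = build_hook_update_request(
        "example-org", "lago", 12345,
        "http://jenkins.phx.ovirt.org/github-webhook/", "s3cr3t")
    print(req.get_method(), req.full_url)
```

Building the request is kept separate from sending it so the target URL and
payload can be inspected before anything hits the API.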
>> >
>> > >
>> > > > What do you think?
>> > >
>> > >
>> > > > Thanks
>> > >
>> > > > Nadav.
>> >
>> > > > _______________________________________________
>> > > > Infra mailing list
>> > > > Infra(a)ovirt.org
>> > > > http://lists.ovirt.org/mailman/listinfo/infra
>> >
>> >
>> > > --
>> > > David Caro
>> >
>> > > Red Hat S.L.
>> > > Continuous Integration Engineer - EMEA ENG Virtualization R&D
>> >
>> > > Tel.: +420 532 294 605
>> > > Email: dcaro(a)redhat.com
>> > > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
>> > > Web: www.redhat.com
>> > > RHT Global #: 82-62605
>> >
>> >
>> >
>>
>>
>> > --
>> > Eyal Edri
>> > Associate Manager
>> > RHEV DevOps
>> > EMEA ENG Virtualization R&D
>> > Red Hat Israel
>>
>> > phone: +972-9-7692018
>> > irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
>
>
>> --
>> David Caro
>
>> Red Hat S.L.
>> Continuous Integration Engineer - EMEA ENG Virtualization R&D
>
>> Tel.: +420 532 294 605
>> Email: dcaro(a)redhat.com
>> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
>> Web: www.redhat.com
>> RHT Global #: 82-62605
>
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)