
On 01/14 10:41, Barak Korren wrote:
Is there an easy way to do so from a Jenkins job without failing the job with a slave connection error? Most projects I know that use ephemeral slaves have to work around it by having a job that starts/creates a slave tag and provisions the slave, then removes it at the end. If we can skip that extra job layer, so much the better for us.
Maybe we could use [1] or [2] to trigger an external service, and [3] to prevent race conditions. It also opens up the possibility of a 'garbage collector' job that shuts down and removes offline slaves (which will cause the pool VMs to come back up clean and re-join Jenkins via the swarm client).
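The selection step of such a garbage collector could be sketched like this — assuming a node listing shaped like the list of dicts that python-jenkins' get_nodes() returns (names and the 'slave-' prefix convention below are hypothetical, just for illustration):

```python
def offline_slaves(nodes, prefix="slave-"):
    """Return names of offline pool slaves that the GC job should delete.

    `nodes` is assumed to be a list of dicts with at least 'name' and
    'offline' keys, as python-jenkins' get_nodes() produces.
    """
    return [n["name"] for n in nodes
            if n.get("offline") and n["name"].startswith(prefix)]

# Example listing as get_nodes() might return it:
nodes = [
    {"name": "master", "offline": False},
    {"name": "slave-01", "offline": True},
    {"name": "slave-02", "offline": False},
]
print(offline_slaves(nodes))  # ['slave-01']
```

The actual deletion call (e.g. delete_node per returned name) would go in the GC job itself, guarded by [3] against races with running builds.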
So essentially no, this is what I said I wanted to avoid :/
IIRC the Puppet manifest for Jenkins already has integration with the swarm plugin; we can use that instead.
Great, I'll look into that.
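For reference, the swarm integration mentioned above might look roughly like the following hiera data — class and parameter names here are assumed from the puppet-jenkins module's jenkins::slave class, so verify them against the module version actually in use:

```yaml
# Hypothetical hiera data for the puppet-jenkins swarm client;
# parameter names are an assumption, check the module's docs.
jenkins::slave::masterurl: 'https://jenkins.example.com/'
jenkins::slave::labels: 'phx ephemeral'
jenkins::slave::executors: 2
```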
Can't we just use a reserved range for those machines instead? There's no need to remove them from Foreman; it can work with machines it did not provision.
Do we have such a range available? I was under the impression I would have to wrestle it out of our existing range, in which Foreman has been poking holes at random...
We have a small range right now for non-Jenkins VMs, but it's easy (maybe not fast, but easy) to get the slaves to free another range. We would have to do so anyhow unless we use internal IPs or request a new range.
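Carving a reserved slave range out of the existing block is easy to sanity-check with the stdlib ipaddress module — the subnets below are made up purely for illustration, not our real ranges:

```python
import ipaddress

# Hypothetical ranges: an existing /22 block and a /24 reserved for slaves.
existing = ipaddress.ip_network("10.0.0.0/22")
reserved = ipaddress.ip_network("10.0.3.0/24")

# The reserved slave range must sit entirely inside the existing block...
print(reserved.subnet_of(existing))                    # True

# ...and a candidate slave address must fall inside the reserved range:
print(ipaddress.ip_address("10.0.3.17") in reserved)   # True
```

A check like this could run in the provisioning job so Foreman-allocated holes never collide with the slave pool.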
I'm in favor of using an internal network for the Jenkins slaves. If they are the ones connecting to the master, there's no need for externally addressable IPs, so no need for public IPs. Though I recall that it was not so easy to set up; better to discuss it with the hosting provider.
I think that even with swarm, eventually it's Jenkins itself that will open connections to the slaves (the Swarm plugin, afaik, is just used to notify Jenkins about the slave's existence; after that it is used just like a regular slave, with SSH from Jenkins), so you will need external addresses for the slaves as long as Jenkins is not running in PHX.
Afaik the swarm plugin is an extension of the JNLP slave connection method and does not allow changing the connection method to SSH; it uses its own (or so it seems from the docs, maybe that changed), which is to connect to the master from the slave.
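That outbound direction is visible in how the swarm client is typically launched on the slave. A small helper that builds the launch command (flag names assumed from the Swarm plugin docs — verify against the client version in use; host and node names are hypothetical):

```python
def swarm_command(master, name, labels, executors=2):
    """Build the swarm-client invocation run ON the slave.

    The slave dials out to the master over JNLP, which is why no
    inbound SSH connection from Jenkins is needed. Flag names are an
    assumption based on the Swarm plugin docs.
    """
    return ["java", "-jar", "swarm-client.jar",
            "-master", master,
            "-name", name,
            "-labels", " ".join(labels),
            "-executors", str(executors)]

cmd = swarm_command("https://jenkins.example.com/", "slave-01",
                    ["phx", "ephemeral"])
print(" ".join(cmd))
```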
Can't you just auto-assign a hostgroup on creation in Foreman or something? A quick search turns up a plugin that might do the trick: https://github.com/GregSutcliffe/foreman_default_hostgroup
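From a skim of that plugin's README, its configuration appears to map fact regexes to a default hostgroup, along these lines — the exact key names and file location should be checked against the README before relying on this sketch:

```yaml
# Hypothetical foreman_default_hostgroup config; shape assumed
# from the plugin's README, names are illustrative only.
:default_hostgroup:
  :facts_map:
    "Jenkins Slaves":
      "hostname": '.*slave.*\.example\.com'
```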
+1 on moving any data aside from the hostgroup assignment to hiera though, so it can be versioned and peer reviewed.
I kinda prefer to move Foreman out of the provisioning process here; I'm burned by our bad experience with it. And it seems to me we are agreed on this.

[1]: https://wiki.jenkins-ci.org/display/JENKINS/Notification+Plugin
[2]: http://git.openstack.org/cgit/openstack-infra/zmq-event-publisher/tree/README
[3]: https://wiki.jenkins-ci.org/display/JENKINS/Single+Use+Slave+Plugin

--
Barak Korren
bkorren@redhat.com
RHEV-CI Team
--
David Caro
Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D
Tel.: +420 532 294 605
Email: dcaro@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605