Using oVirt VM pools in oVirt infra

David Caro dcaro at redhat.com
Wed Jan 13 17:14:11 UTC 2016


On 01/13 18:02, Anton Marchukov wrote:
> Hello All.
> 
> > What this comes down to is that if you run 'shutdown' in a VM from a
> > > pool, you will automatically get back a clean VM a few minutes later.
> > >
> >
> > Is there an easy way to do so from a jenkins job without failing the job
> > with a slave connection error? Most projects I know that use ephemeral
> >
> 
> But why do we need it here? Do we really need to target ephemeral slaves,
> or is UI management of pool servers not good enough in oVirt?

The issue is being able to recycle the slaves without breaking
any jenkins jobs, and if possible, automatically. IIUC the key idea of
those slaves is that they are ephemeral, so we can create/destroy
them on demand really easily.
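A hedged sketch of how a job could trigger that recycle without failing
itself: the `toggleOffline` endpoint is part of the stock Jenkins REST API,
and `JENKINS_URL`/`NODE_NAME` are standard Jenkins job environment
variables, but the credentials and token path below are made up.

```shell
# Build the Jenkins REST endpoint that toggles a node offline.
offline_url() {
  printf '%scomputer/%s/toggleOffline' "$1" "$2"
}

# Only act when running inside a Jenkins job (both variables are set
# by Jenkins itself on a connected slave).
if [ -n "${JENKINS_URL:-}" ] && [ -n "${NODE_NAME:-}" ]; then
  MSG="recycling pool VM $(hostname)"
  # Mark the node offline first, so Jenkins stops scheduling builds on
  # it and the upcoming disconnect is not treated as a job failure.
  # The user and token file are assumptions for the sketch.
  curl -s -X POST "$(offline_url "$JENKINS_URL" "$NODE_NAME")" \
       --data-urlencode "offlineMessage=${MSG}" \
       --user "jenkins:$(cat /etc/jenkins/api_token)"
  # Delay the shutdown slightly so the job can still report its result;
  # the pool then hands back a clean VM a few minutes later.
  sudo shutdown -h +1 "$MSG"
fi
```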

> 
> > 1. Cease from creating new VMs in PHX via Foreman for a while.
> > > 2. Shutdown the PHX foreman proxy to disconnect it from managing the
> > > DNS and DHCP.
> > > 3. Map out our currently active MAC->IP->HOSTNAME combinations and
> > > create static DNS and DHCP configuration files (I suggest we also
> > > migrate from BIND+ISC DHCPD to Dnsmasq which is far easier to
> > > configure and provides very tight DNS, DHCP and TFTP integration)
> > > 4. Add configuration for a dynamically assigned IP range as described
> > above.
> > >
> > Can't we just use a reserved range for those machines instead? There's
> > no need to remove them from foreman; it can work with machines it does
> > not provision.
> >
> 
> As I understand it, the problem here is that in one VLAN we obviously can
> have only one DHCP server, and if it is managed by foreman it may not be
> possible to have a range there that foreman does not touch. But it depends
> on how foreman touches the DHCP config.

We already have reserved IPs and ranges in the same DHCP server that
is managed by foreman.
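For illustration, the dnsmasq setup from the migration steps quoted above
(static MAC->IP->hostname mappings plus a dynamic range, with DNS and TFTP
in the same daemon) could look roughly like this. All MACs, IPs, hostnames
and the domain are made up:

```text
# /etc/dnsmasq.conf (sketch)
# Static MAC -> IP -> hostname mappings for the existing hosts (step 3):
dhcp-host=52:54:00:aa:bb:01,10.0.0.11,slave01
dhcp-host=52:54:00:aa:bb:02,10.0.0.12,slave02
# Dynamically assigned range for the ephemeral pool VMs (step 4):
dhcp-range=10.0.0.100,10.0.0.199,12h
# DNS: answer for the local domain, forward everything else upstream:
domain=phx.example.org
local=/phx.example.org/
# TFTP for PXE boot, served by the same daemon:
enable-tftp
tftp-root=/var/lib/tftpboot
```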

> 
> 
> > > Another way to resolve the current problem of coming up with a
> > > dynamically assignable range of IPs, is to create a new VLAN in PHX
> > > for the new pools of VMs.
> > >
> > I'm in favor of using an internal network for the jenkins slaves. If
> > they are the ones connecting to the master, there's no need for
> > externally addressable IPs, so no need for public IPs, though I recall
> > that it was not so easy to set up; better discuss with the hosting
> >
> 
> I think that if we want to scale, public IPv4 addresses might indeed be
> quite wasteful. I thought about using IPv6, since e.g. we can just have
> one prefix and there is no need for DHCP, so such VMs could live in the
> same VLAN as foreman if needed with no problem. But as I understand it,
> we need IPv4 addressing on the slave for the tests, do I get that right?
>

I'm not really sure, but if we are using lago for the functional
tests, maybe there's no need for them. I'm not really familiar with
IPv6, maybe it's time to get to know it :)
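For reference, the DHCP-less IPv6 setup Anton mentions is stateless address
autoconfiguration (SLAAC): a router advertises a prefix and hosts derive
their own addresses from it. A minimal radvd.conf sketch; the interface
name and prefix are made up:

```text
# /etc/radvd.conf (sketch)
interface eth0 {
    AdvSendAdvert on;
    prefix 2001:db8:cafe::/64 {
        AdvOnLink on;
        # Hosts self-assign addresses from the prefix, no DHCP needed:
        AdvAutonomous on;
    };
};
```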

> 
> > Can't you just auto-assign a hostgroup on creation in foreman, or
> > something?
> > Quick search throws a plugin that might do the trick:
> >   https://github.com/GregSutcliffe/foreman_default_hostgroup
> >
> > +1 on moving any data aside from the hostgroup assignment to hiera
> > though, so it can be versioned and peer reviewed.
> >
> 
> Can we somehow utilize cloud-init for this?

I don't like the slaves explicitly registering themselves in
foreman; that makes provisioning totally coupled to it from the
slave's perspective.
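One way to keep provisioning decoupled from foreman is to let cloud-init do
the slave-side setup and have the slave dial in to the master itself, e.g.
via the Jenkins Swarm plugin. A hedged cloud-config sketch; the hostname,
package name, URL and path below are all assumptions:

```yaml
#cloud-config
# Hypothetical slave bootstrap: nothing here registers in foreman.
hostname: pool-slave
packages:
  - java-1.8.0-openjdk
runcmd:
  # The swarm client connects out to the master, so the master does not
  # need to know the slave's address in advance (URL is made up):
  - curl -sLo /root/swarm-client.jar https://jenkins.example/swarm-client.jar
  - java -jar /root/swarm-client.jar -master https://jenkins.example/ -name pool-slave
```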

> 
> Also, do we really want to use vanilla OS templates for this, instead of
> building our own based on vanilla but with the configuration settings we
> need? I think it would also speed up slave creation, although since they
> are not ephemeral this will not give much.
> 
> -- 
> Anton Marchukov
> Senior Software Engineer - RHEV CI - Red Hat

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dcaro at redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605