Moving to new infra vendor for the oVirt project

Eyal Edri eedri at redhat.com
Sun Dec 22 08:32:47 UTC 2013



----- Original Message -----
> From: "Ewoud Kohl van Wijngaarden" <ewoud+ovirt at kohlvanwijngaarden.nl>
> To: infra at ovirt.org
> Sent: Thursday, December 19, 2013 4:22:18 PM
> Subject: Re: Moving to new infra vendor for the oVirt project
> 
> On Thu, Dec 19, 2013 at 01:43:16AM -0500, Eyal Edri wrote:
> > Due to some technical difficulties and our experience so far, we have
> > decided to switch hosting providers for the oVirt project. The new
> > provider will host mostly the Jenkins infra (mainly jenkins slaves),
> > but potentially the alterway servers as well (for production servers).
> > 
> > I would like to hear your thoughts about which servers we need and what
> > the HW requirements should be for the infra.
> > 
> > The proposed infra layout:
> > ===========================
> > 1. ovirt-engine 3.3.x stable manager to run on a VM [1]
> > 2. 3 bare-metal servers to serve as hypervisors for jenkins slaves and
> >    nested VMs for automation tests. [2]
> >    We expect to run at least 4 VMs on each hypervisor for the various
> >    jenkins slaves, and to be able to take one host down for maintenance
> >    without affecting infra uptime too much (see the capacity sketch
> >    after this list).
> > 3. external storage for a storage domain (NFS or iSCSI) and backups for
> >    all oVirt services. [3]
> > 4. possibly manage the alterway servers on the same ovirt-engine
> >    instance, if performance doesn't decline.
> >    (we can request a European datacenter if that will reduce latency.)
> >    We might want to install another foreman-proxy in that DC to allow
> >    installing VMs locally.
> >    We will also want to migrate the jenkins server, which is currently
> >    hosted on bare metal at alterway, to a VM.
> > 5. fast network connection [4]
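
A quick back-of-the-envelope check of item 2 above (just a sketch: the
3-host / 4-VM numbers come from the layout, the per-VM RAM size is my
assumption):

    # Capacity check for the proposed layout: 3 hypervisors, at least
    # 4 jenkins-slave VMs each, with the ability to take one host down
    # for maintenance (N+1).
    hosts = 3
    vms_per_host = 4            # minimum from the layout above
    ram_per_vm_gb = 8           # assumption: typical jenkins slave size
    host_ram_gb = 48            # proposed base config (expandable to 96)

    total_vms = hosts * vms_per_host                  # 12 slaves
    survivors = hosts - 1                             # one host in maintenance
    vms_per_survivor = total_vms // survivors         # 6 VMs per host
    ram_needed_gb = vms_per_survivor * ram_per_vm_gb  # 48GB

    print(ram_needed_gb, host_ram_gb)  # 48 48 -> at the limit, no headroom

So at an assumed 8GB per slave, a maintenance event already pushes a 48GB
host to its limit, which is worth keeping in mind for the RAM discussion
below.
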
> > 
> > We need to decide on the optimal hardware for this infrastructure,
> > keeping in mind that the oVirt project is expanding and will need more
> > resources going forward.
> > 
> > An initial estimate for the hardware (feel free to pitch in and propose
> > changes):
> > =======================================
> > [1] The engine VM should have 16GB RAM and a 200GB disk; I'm not sure
> >     yet which CPU would be optimal - thoughts?
> > [2] Bare-metal hosts should have 48GB RAM (with the option to expand to
> >     96GB if needed); there's no need for much local disk space if we're
> >     going to use external storage servers. Available servers are:
> >     http://www.softlayer.com/dedicated-servers/dual-processor-servers
> > [3] 2TB of NAS or iSCSI storage for hosting backups and a storage domain
> >     for all the VMs.
> > [4] They offer 5000GB of public bandwidth and a choice of uplink speeds:
> >     100Mbps for the public and private networks at no extra charge, or
> >     a 1Gbps uplink for an extra cost.
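
Some quick math on that allotment (a sketch; it assumes the 5000GB figure
is a monthly cap, which is the usual billing model for dedicated servers):

    # What sustained rate does a 5000GB/month transfer allotment allow?
    cap_gb = 5000.0
    seconds_per_month = 30 * 24 * 3600                 # ~2.59M seconds

    avg_mbps = cap_gb * 8 * 1000 / seconds_per_month   # GB -> megabits
    print("%.1f Mbps average" % avg_mbps)              # ~15.4 Mbps

So the monthly cap, not the uplink speed, is the real ceiling: even the
100Mbps uplink could only run at full speed about 15% of the time before
exhausting the allotment, and the 1Gbps option mainly buys burst speed.
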
> 
> I'd prefer to start with 96GB RAM. You can never have enough RAM ;)
> I think that 4 slaves per hypervisor isn't all that much given we also
> want to start supporting multiple operating systems.

Correct, though if the RAM is upgradable I'm not sure it's worth going ahead
with the full option from the start, because it might affect our ability to
get budget for other components (like external storage).

I'd like to get feedback on which actual server to consider, since that
might affect the overall cost.
There are five server families to choose from (specific specs are at the
URL [1]), which differ in CPU power and cache size:

1. Xeon 5300 series (not relevant - max 32GB memory)
2. Xeon 5400 series (not relevant - max 32GB memory)
3. Xeon 5500 series
4. Xeon 5600 series
5. Xeon E5-2600 series

[1] http://www.softlayer.com/dedicated-servers/dual-processor-servers

> 
> Let's say we have Fedora, EL and Ubuntu. For each we support 1 or 2
> releases (F19, F20, EL5, EL6, Ubuntu latest LTS); that's already 5
> distros. With 2 slaves per distro, that's 10 slaves. Then we also
> sometimes want to start supporting a beta (F21, EL7, Ubuntu next LTS).
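
Putting Ewoud's numbers into a quick sketch (the distro names are the ones
from his list; the slaves-per-distro and host counts are from this thread):

    stable = ["F19", "F20", "EL5", "EL6", "Ubuntu LTS"]
    betas = ["F21", "EL7", "Ubuntu next LTS"]      # picked up occasionally
    slaves_per_distro = 2
    hosts = 3

    slaves_now = len(stable) * slaves_per_distro                  # 10
    slaves_max = (len(stable) + len(betas)) * slaves_per_distro   # 16

    print(slaves_now, slaves_max, slaves_max / float(hosts))  # 10 16 ~5.3

That's already 5-6 slaves per hypervisor in normal operation, and about 8
per host while one is in maintenance - well above the 4-per-host baseline
in the layout, which supports the point about RAM.
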
> 
> More RAM can also lower the requirement on storage because of caching.
> 
> Network-wise I may be spoiled, but at $work it's all 1G and more and
> more 10G, so I'd prefer 1G. That said, I don't know the extra cost. It
> will also depend on whether we want to put resources.ovirt.org there.
> Maybe we can get more community support by setting up a mirrorlist and
> lower the bandwidth requirement.
> 
> That said, +1 to moving. I'm surprised it was this hard to get things
> going at Rackspace.
> _______________________________________________
> Infra mailing list
> Infra at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
> 


