----- Original Message -----
> From: "Robert Middleswarth" <robert(a)middleswarth.net>
> To: "Ewoud Kohl van Wijngaarden" <ewoud+ovirt(a)kohlvanwijngaarden.nl>
> Cc: infra(a)ovirt.org
> Sent: Wednesday, August 1, 2012 6:02:06 AM
> Subject: Re: Moving Jenkins master ASAP
>
> On 07/31/2012 07:12 PM, Ewoud Kohl van Wijngaarden wrote:
>> On Tue, Jul 31, 2012 at 02:57:56PM -0400, Robert Middleswarth
>> wrote:
>>> On 07/31/2012 02:16 PM, Ewoud Kohl van Wijngaarden wrote:
>>>> On Tue, Jul 31, 2012 at 07:52:25AM -0700, Karsten 'quaid' Wade
>>>> wrote:
>>>>> On 07/31/2012 07:44 AM, Karsten 'quaid' Wade wrote:
>>>>>> We need to pick a new hosting solution for jenkins.ovirt.org.
>>>>>>
>>>>>> One idea is for us to throw out some favorite hosting providers
>>>>>> here, and see if we can sort out what would be a good solution.
>>>>> This post is what made me aware that EC2 would be a dead-end for
>>>>> us
>>>>> for now:
>>>>>
>>>>> http://blog.carlmercier.com/2012/01/05/ec2-is-basically-one-big-ripoff/
>>>>>
>>>>> In that post, the author used this host for comparison testing:
>>>>>
>>>>> http://joesdatacenter.com/
>>>> My employer is a hosting provider so I'm somewhat biased here.
>>> It's not just about the provider. I would need to see the bandwidth
>>> charts for the current Jenkins, but I assume just about any provider
>>> can handle its bandwidth needs. The real issue is the server the
>>> Jenkins master needs to run on: EC2 isn't cutting it. My testing box
>>> has a basic SATA drive and is running much faster, but there is no
>>> user load on it. We really need a box with RAID 10 drives in it to
>>> handle the high IO needs.
>>
>> http://jenkins.ekohl.nl/munin/ekohl.nl/jenkins.ekohl.nl/index.html are
>> the stats of the Jenkins slave we (my employer) provide. This is a
>> production load. Quick analysis shows that IO is limiting at times,
>> but the high IO peaks correlate with swap, so adding more than 8GB of
>> RAM would lessen the demand on the IO. Note that it is currently
>> running on our SATA SAN, but I don't know the RAID config off the top
>> of my head.
> Slave boxes are different from the master. Jenkins copies all the
> files over from the master to the slave, then back up to the master,
> using a good chunk of bandwidth and disk IO on both the slaves and
> the master. Every job requires IO on the master, and a lot of it. As
> the number of slaves goes up, so does the IO on the master. The
> current EC2 instance isn't holding its own under load. Spikes can
> literally take it offline, and even when it is idle it still shows a
> ton of IO from the people visiting the site. The question is: with
> the limited budget, what can we do?
>
> What we really want for the master is a dedicated machine with a
> SAS/SSD RAID 10 controller, i.e. the profile of a database server.
> What we can get away with for now is the real question. I can offer
> up my boxes; they are just running on SATA drives and currently sit
> behind my 70/35 Verizon FiOS connection ( Jenkins.ovirt.info ). We
> could move them to a local co-lo a friend of mine runs for about
> $40.00 per U per month.
>
> What does everyone else think?
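The swap/IO correlation Ewoud mentions above could be sanity-checked from the munin data. A minimal sketch, using made-up sample readings purely for illustration (the real numbers would come from the munin graphs, which are not reproduced here):

```python
# Sketch: check whether disk IO peaks track swap activity. The sample
# values below are hypothetical, not the actual munin stats.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 5-minute samples: disk IO (ops/s) and swap activity (pages/s)
disk_io = [120, 135, 900, 150, 880, 140, 950, 130]
swap    = [  2,   3, 450,   5, 430,   4, 470,   3]

r = pearson(disk_io, swap)
print(f"correlation: {r:.2f}")
```

A correlation close to 1.0 would support the claim that the IO spikes are driven by swapping, i.e. that more RAM (rather than faster disks) would relieve the IO pressure.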
We can look at the Jenkins master load on jenkins.ovirt.org/monitoring
(you need to be a Jenkins administrator to see it). That only monitors
the web traffic, not the actual build process, and 3/4 of the data
being pushed is archive.zip files for the node-iso builds. Although
part of the build process, that is the only part that shows up in the
monitoring. And with nightly getting pushed to sometime this week, the
traffic should drop even lower.
So far, the only options I have seen discussed are:
1) Limit what Jenkins can do by leaving it as is until hardware becomes
available through *Red Hat in a few months. I don't think Jenkins will
be able to make use of any more slaves, since it is having a hard time
even keeping up with the current number of slaves.
2) Build a second master on EC2 and split the load up some.
3) Move to another VPS provider and see if the Jenkins master will run
better on one of those services, or, if we can get the budget for it,
a dedicated box.
4) Find someone else who will donate hardware; I would say 16GB of RAM
and either local storage or a 10G storage network is pretty much the
minimum requirement.
5) Use my hardware on ovirt.info and migrate the slaves over to it,
either using my current ISP, which is fast but not on a static IP, or
co-locating the boxes (about $80 a month). My boxes are dual quad-cores
with 16GB of RAM each, but they only have SATA drives and no RAID
controller.
*Since my understanding is that Red Hat has promised hardware in Q4
2012/Q1 2013, we are looking to get by until then.
--
Thanks
Robert Middleswarth
@rmiddle (twitter/IRC)