CloudSpin is back!

Eyal Edri eedri at redhat.com
Tue Apr 26 12:25:50 UTC 2016


+1.

Could be great to have a backup ovirt-engine for all our production
services.
I would focus on that front; we don't currently need more
hardware/resources for CI, IMO, since we haven't yet maximized our PHX
resources (we will soon, with the stateless pool + local disk optimizations).
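
For reference, a minimal sketch of what an engine backup/restore could
look like with the stock engine-backup tool (the file paths and names
below are placeholders, not our actual setup):

    # On the engine host: full backup of the engine DB and config files.
    engine-backup --mode=backup --scope=all \
        --file=/var/backup/engine-backup.tar.bz2 \
        --log=/var/log/engine-backup.log

    # On a standby host with a fresh engine install: restore the backup,
    # provisioning the DB, then run engine-setup to finish.
    engine-backup --mode=restore --provision-db \
        --file=/var/backup/engine-backup.tar.bz2 \
        --log=/var/log/engine-restore.log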

On Tue, Apr 26, 2016 at 3:20 PM, Barak Korren <bkorren at redhat.com> wrote:

> Maybe we can ask him to host our backups/DR. WDYT?
>
>
> ---------- Forwarded message ----------
> From: Brian Proffitt <bproffit at redhat.com>
> Date: 25 April 2016 at 18:51
> Subject: Fwd: CloudSpin is back!
> To: Eyal Edri <eedri at redhat.com>, infra <infra at ovirt.org>, Donny Davis
> <dondavis at redhat.com>
>
>
> All:
>
> This is Donny, who worked with me to do an oVirt case study last year.
> He now works for Red Hat, and is interested in donating server space
> to the oVirt project, if needed.
>
> See the message below and let's get the discussion going!
>
> Peace,
> Brian
>
> ---------- Forwarded message ----------
> From: Donny Davis <dondavis at redhat.com>
> Date: Sat, Apr 23, 2016 at 12:16 AM
> Subject: CloudSpin is back!
> To: Brian Proffitt <bproffit at redhat.com>
>
>
> Brian,
>
> As promised, CloudSpin is back.
>
> This time I am bringing a little more to the party. I have some (30)
> public IPv4 addresses for the community, I would assume mainly for
> committers or infra needs. IPv4 addresses are so hard to get hold of
> outside the "cloud", but the ones I do have will be shared for a
> great project.
>
> I also have some dedicated HW servers for the oVirt project. There
> are currently 3 idle servers with SSDs and 100+ GB of RAM each.
> Please let me know what you need, or what the devs want.
>
> I have an all-new DC, with 52TB of SAN storage and a new blade
> chassis. Everything is on 10GbE fiber. The only change is the name;
> I didn't renew cloudspin in time and some **sh*l* took it from me...
>
> Anyways, I am on the fortnebula domain now.
>
> System status as of now is as follows:
>
> oVirt - 6 nodes with 48GB each and 2x quad-core 2.93 GHz procs (196GB
> of RAM for each is en route)
>
> OpenStack - 4 nodes with 96GB of RAM and 2x quad-core 2.93 GHz procs
>
> OpenShift - 3 nodes with 96GB of RAM and 2x quad-core 2.93 GHz procs
>
> One critical-systems SAN, 2TB available (this SAN outperforms SSD storage)
>
> One general-storage SAN, 52TB available
>
> You helped me get to Red Hat, which was my dream. I would like to give
> something back to the oVirt community that got me here.
>
> If the demand is higher for any particular system, then resources will
> be shifted to the system with the highest demand, i.e., I will
> decommission OpenStack or OpenShift to meet the demand.
>
> Also, everything is on UPS, and a generator is going to be
> installed in the next few weeks.
>
> I aim to provide 99% uptime to the oVirt project.
>
> Thanks again, Brian
>
> Donny Davis
> dondavis at redhat.com
> DOD Public Sector Solution Architect
> RHCVA
> Cell 805 814 6800
>
>
>
> --
> Brian Proffitt
> Principal Community Analyst
> Open Source and Standards
> @TheTechScribe
> 574.383.9BKP
>
>
>
>
> --
> Barak Korren
> bkorren at redhat.com
> RHEV-CI Team


-- 
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)