Proposing a new infra design

Kiril Nesenko kiril at redhat.com
Mon Jun 10 05:22:10 UTC 2013


----- Original Message -----
> From: "Ewoud Kohl van Wijngaarden" <ewoud+ovirt at kohlvanwijngaarden.nl>
> To: infra at ovirt.org
> Sent: Sunday, June 9, 2013 5:50:20 PM
> Subject: Re: Proposing a new infra design
> 
> On Sun, Jun 09, 2013 at 08:03:32AM -0400, Kiril Nesenko wrote:
> > Ewoud Kohl van Wijngaarden wrote:
> > > On Sun, Jun 09, 2013 at 02:40:02AM -0400, Kiril Nesenko wrote:
> > > > This is the design that I propose:
> > > >
> > > > amazon vm1(engine) USA -
> > > >                    |
> > > >                    |- rackspace01.ovirt.org
> > > >                    |- rackspace02.ovirt.org
> > > >
> > > > amazon vm2(engine) France -
> > > >                    |
> > > >                    |- alterway01.ovirt.org
> > > >                    |- alterway02.ovirt.org
> > >
> > > Regarding the amazon engines: don't you need some layer 2 networking
> > > between the manager and the hosts? Wouldn't a VM at rackspace be much
> > > better as an engine because of lower latency? Maybe rackspace can just
> > > provide a VLAN between the engine and hosts which should make our
> > > management much easier? Maybe we can try to achieve a similar situation
> > > at Alterway?
> >
> > It would be great!
> 
> I'd propose that we try to set up rackspace using this design and when
> we know this is running well, we look at converting/upgrading alterway.
> 
> >
> > >
> > > > - Storage
> > > > * For this design we need storage services that will be located in the
> > > >   same DCs as our bare metal hosts.
> > >
> > > Could we use gluster where possible? At alterway for example. For
> > > rackspace I'd prefer local storage per node, but I'll get to that later.
> >
> > gluster is a possible solution, but for gluster we still need external
> > storage.
> 
> You can run gluster on the hosts. Then you don't need external storage.

For the gluster service, do we need more bare metal hosts, or do you want
to run it on the existing hosts?
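If the idea is to run it on the existing hosts, a rough sketch of what that could
look like is below - a 2-way replicated gluster volume across the two Rackspace
machines, so no extra storage hardware is needed. The host names are the ones from
the diagram above; the brick path and volume name are only placeholders, nothing
we have decided on:

import subprocess

# Rough sketch: build a 2-way replicated gluster volume on the two existing
# Rackspace bare metal hosts. Host names come from the diagram above; the
# brick directory and volume name are placeholders.
HOSTS = ["rackspace01.ovirt.org", "rackspace02.ovirt.org"]
BRICK = "/gluster/bricks/ovirt-data"   # placeholder brick path
VOLUME = "ovirt-data"                  # placeholder volume name

def run(cmd):
    # Print and run a command, failing loudly on a non-zero exit code.
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# Run this on rackspace01: probe the peer so both hosts join one trusted
# pool, then create and start the replicated volume with one brick per host.
run(["gluster", "peer", "probe", HOSTS[1]])
run(["gluster", "volume", "create", VOLUME, "replica", "2",
     HOSTS[0] + ":" + BRICK,
     HOSTS[1] + ":" + BRICK])
run(["gluster", "volume", "start", VOLUME])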

> 
> > Why do you want to use local storage :) ?
> 
> For rackspace nodes, I'd prefer to use local storage for jenkins slaves.
> Once we set them up properly I think they can be considered throwaway and
> thus don't need HA. By using direct-attached storage you can gain some
> extra performance. If we have hosts that do need HA (if we were to run
> the jenkins master there for example), I'd prefer using some form of
> shared storage so we can do failovers.
> 
> >
> > >
> > > > * Storage for resources.ovirt.org - it makes no sense for the VM to store
> > > >   the RPMs on its own disk. Much better to use a VM with a small HD and
> > > >   external storage for storing the RPMs.
> > >
> > > I don't quite understand this. I get that you'd want different
> > > partitions, but why external storage? Whether the host manages this or
> > > the guest, does it really make a difference?
> >
> > I am not sure on which servers resources.ovirt.org is running right
> > now, but I would like to run our infra on our servers. For this
> > purpose it's better to create a VM with a small HD and use external
> > storage to store the RPMs on it.
> 
> Currently it's running on linode01. I still don't see the difference
> between the hypervisor using the shared storage (nfs/iscsi/gluster/...)
> and the VM. One advantage of the hypervisor doing it is that you don't
> have to worry about access to the storage from inside the VM.

What is linode01? Bare metal?

What I meant is that the VM should use external storage for storing the RPMs. In that
case you can create a VM with a small HD and save space on the DC's storage domain for
other VMs.

The second reason: if the VM ever gets corrupted, we will still have all our RPM repos
on the external storage, so you will be able to install a new VM and simply mount that
storage again.
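To make this concrete, here is a rough sketch of what I mean. The storage server name,
export path and mount point are only examples, nothing like that exists yet: the VM keeps
a small local disk and the repos live on an external NFS export that can be re-mounted on
a freshly installed VM.

import subprocess

# Rough sketch: mount an external NFS export on resources.ovirt.org so the
# RPM repos live outside the VM's own disk. Server, export and mount point
# below are examples only, not existing machines.
NFS_EXPORT = "storage01.ovirt.org:/exports/resources"
MOUNT_POINT = "/srv/resources"

subprocess.check_call(["mkdir", "-p", MOUNT_POINT])
subprocess.check_call(["mount", "-t", "nfs", NFS_EXPORT, MOUNT_POINT])

# A matching /etc/fstab line would make the mount survive reboots, e.g.:
#   storage01.ovirt.org:/exports/resources  /srv/resources  nfs  defaults,_netdev  0 0

If the VM breaks, the same two calls on a freshly installed VM bring the repos straight back.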


