On Thursday, September 18, 2014 01:10:15 AM Michael Scherer wrote:
On Wednesday, September 17, 2014 at 15:37 -0400, R P Herrold wrote:
> On Wed, 17 Sep 2014, Michael Scherer wrote:
> > As I said in the past, the plan wouldn't work. To have 2 gears
> > communicate, we need to have them set up in a specific way, not just 2
> > gears in the same account. If one is moved to another node, we need to
> > have a specific trigger on the webserver gear to trigger a potential
> > configuration change.
>
> Why not just point the two through a pair of keyed access
> openvpn links, each to a fixed (and routing) central hub?
>
> MySQL will communicate just fine across a network fabric:
>
>                    hub
>          10.0.0.1      10.0.1.1
>              /             \
>             /               \
>       10.0.0.2           10.0.1.2
>        gear A             gear B
>      (the wiki)      (the MySQL server)
>
> The hub just routes 10.0.1 and 10.0.0 back and forth.
>
> Nothing changes, save re-establishment of an openvpn link when
> a 'spoke' moves.
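[ For illustration only, a minimal sketch of how the quoted hub-and-spoke
proposal might look with OpenVPN static-key point-to-point tunnels. The
hostname hub.example.org, the ports, and the key file names are made up,
and the hub would additionally need IP forwarding enabled
(net.ipv4.ip_forward=1). ]

    # hub, first tunnel (towards gear A / the wiki), e.g. /etc/openvpn/gearA.conf
    dev tun0
    port 1194
    proto udp
    ifconfig 10.0.0.1 10.0.0.2      # local / remote tunnel addresses
    secret /etc/openvpn/gearA.key   # pre-shared ("keyed") static key

    # hub, second tunnel (towards gear B / MySQL), e.g. /etc/openvpn/gearB.conf
    dev tun1
    port 1195
    proto udp
    ifconfig 10.0.1.1 10.0.1.2
    secret /etc/openvpn/gearB.key

    # gear A (the wiki), e.g. /etc/openvpn/wiki.conf
    remote hub.example.org 1194
    dev tun0
    proto udp
    ifconfig 10.0.0.2 10.0.0.1
    secret /etc/openvpn/gearA.key
    route 10.0.1.0 255.255.255.0    # reach the MySQL gear via the hub

    # gear B (MySQL) is the mirror image: it points at port 1195 and
    # routes 10.0.0.0/24 back through the hub.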
I would be slightly against the idea because:
1) we do not have root access in the gears
2) the firewall will likely not be open for that from the gear to the
external world
3) one of the main selling points of using openshift online was that we
do not have to manage the platform aspect. Adding openvpn to bypass the
platform is kinda managing a different platform than what we have, and
kinda negates the main advantage of using openshift.
4) we would have to manage the hub ( so we'd need to manage 1 more server ),
so we might as well manage mysql and the wiki on that server and that's
it?
If we must stretch the platform to its limit to make it do what we want,
I think we should accept that what we want is not what we have.
Again, I think openshift is a fine product when you use it with software
made for the platform ( i.e., aware of the scaling requirements, aware of
the variables for integration, stateless if possible ).
But currently, it is:
- not integrated with puppet ( so we have 2 identity stores )
- not integrated with icinga ( so it has its own monitoring )
- no backups made by ovirt infra ( but made by openshift ops )
- various space issues ( with a quite complex solution )
We can surely solve each of these with enough hacks. I can surely run
puppet inside the gear if I want, run a nagios agent if we want,
make a clever backup script, and solve the space issue by reinstalling
everything.
But if we go through the pain of reinstallation and update, a more standard
setup would be cleaner and easier in the future, by using straight
tarballs from upstream, by using a standard system to cache the data, etc,
etc.
I can get you a publicly accessible VM in the PHX lab for the migration, but the
DNS will take some time to change; would that suffice for you?
If so, tell me the OS you need and how much space you think we will need for
it.
--
David Caro
Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D
Tel.: +420 532 294 605
Email: dcaro(a)redhat.com
Web: www.redhat.com
RHT Global #: 82-62605