On Wed, 2012-11-07 at 12:40 +0100, David Jaša wrote:
> Ewoud Kohl van Wijngaarden wrote on Wed, 07. 11. 2012 at 12:08 +0100:
> > On Wed, Nov 07, 2012 at 11:23:27AM +0100, David Jaša wrote:
> > > Ewoud Kohl van Wijngaarden wrote on Wed, 07. 11. 2012 at 11:16 +0100:
> > > > On Wed, Nov 07, 2012 at 03:52:14AM -0500, Simon Grinberg wrote:
> > > > > ----- Original Message -----
> > > > > > From: "Michal Skrivanek" <michal.skrivanek(a)redhat.com>
> > > > > > To: engine-devel(a)ovirt.org
> > > > > > Sent: Tuesday, November 6, 2012 10:39:58 PM
> > > > > > Subject: [Engine-devel] SPICE IP override
> > > > > >
> > > > > > Hi all,
> > > > > > On behalf of Tomas - please check out the proposal for
> > > > > > enhancing our SPICE integration to allow returning a custom
> > > > > > IP/FQDN instead of the host IP address.
> > > > > >
> > > > > > http://wiki.ovirt.org/wiki/Features/Display_Address_Override
> > > > > > All comments are welcome...
> > > >
> > > > My 2 cents,
> > > >
> > > > > This works under the assumption that all users are either
> > > > > outside the organization or inside it.
> > > > > But consider the following scenarios, based on a topology where
> > > > > users in the main office are inside the corporate network while
> > > > > users in remote offices / on the WAN are on a detached network
> > > > > on the other side of the NAT / public firewall:
> > > > >
> > > > > With the current 'per host override' proposal:
> > > > > 1. An admin from the main office won't be able to access the VM
> > > > >    console.
> > > > > 2. No mixed environment, meaning that you have to have
> > > > >    designated clusters for remote-office users vs. main-office
> > > > >    users - otherwise connectivity to the console is determined
> > > > >    by a scheduler decision, or may break on live migration.
> > > > > 3. Based on #2, if I'm a user travelling between offices I'll
> > > > >    have to ask the admin to turn off my VM and move it to an
> > > > >    internal cluster before I can reconnect.
> > > > >
> > > > > My suggestion is to convert this to an 'alternative' IP/FQDN,
> > > > > sending the SPICE client both the internal FQDN/IP and the
> > > > > alternative. The SPICE client should detect which of the two is
> > > > > available and auto-connect.
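For illustration only: the availability check could be a plain connect
test with a short timeout. A minimal sketch in shell, with invented
host names and timeout, of roughly what such a client-side fallback
would do (a real SPICE client would do the equivalent with a
connect() timeout internally):

    # try the internal address first; fall back to the alternative
    # -z: connect-only probe, -w 2: two-second timeout (invented values)
    if nc -z -w 2 vm-host.internal.example.com 5900; then
        addr=vm-host.internal.example.com
    else
        addr=console.example.com
    fi
    # hand the winning address to the viewer
    remote-viewer "spice://$addr:5900"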
> > > > >
> > > > > This requires an enhancement of the SPICE client, but it still
> > > > > solves all the issues raised above (actually it solves about
> > > > > 90% of the use cases I've heard about in the past).
> > > > >
> > > > > Another alternative is for the engine to 'guess' or 'elect'
> > > > > which one to use, alternative or main, based on the IP of the
> > > > > client - meaning the admin provides the client IP ranges that
> > > > > should get the internal host address vs. the alternative - but
> > > > > this is more complicated than the previous suggestion.
> > > > >
> > > > > Thoughts?
> > > >
> > > > I agree with where you're going with this. The story I'd like to
> > > > see supported is close to this. We have external customers who
> > > > should know nothing about our internal network, but should be
> > > > able to access the console of their VMs. Currently we do this
> > > > with a custom frontend which uses the API (and is about as old as
> > > > the RHEV 2.2 API) and a TCP proxy, but we'd like to move to the
> > > > standard UI. Currently the console connection prevents us from
> > > > doing so.
> > >
> > > You could do that with this proposal, if you:
> > > 1) DNAT some external-facing IPs to your hypervisor display
> > >    network IPs, and
> > > 2) resolve the display network FQDN to the DNATing machine's IPs
> > >    for external queries.
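As a rough sketch of what that DNAT could look like on a Linux
gateway - all addresses invented for the example (public IP
198.51.100.10, hypervisor display IP 10.0.0.11):

    # 1:1 DNAT: anything hitting the public IP is sent to the
    # hypervisor's display network IP, keeping the destination port
    iptables -t nat -A PREROUTING -p tcp -d 198.51.100.10 \
        -j DNAT --to-destination 10.0.0.11
    iptables -A FORWARD -p tcp -d 10.0.0.11 -j ACCEPT

    # external DNS view: the display FQDN resolves to the gateway,
    # e.g. a zone record like (names invented):
    #   display-hv01.example.com.  IN  A  198.51.100.10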
> >
> > I imagine you need 1 external-facing IP per host, which makes it
> > expensive to scale since IPv4 space is very limited.
> That's the cost of a quick-to-implement solution.
> If it is possible to have a per-host display port range, you could
> work around this limitation by setting non-overlapping ranges for
> each host and using a single proxy or DNAT machine that decides
> which port to forward based on the range.
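A minimal sketch of that single-machine variant, assuming hosts really
can be pinned to non-overlapping display port ranges, with invented
values (hv01 owns ports 5900-5999 at 10.0.0.11, hv02 owns 6000-6099 at
10.0.0.12, one shared public IP on the gateway):

    # route by destination port range; the port itself is preserved
    iptables -t nat -A PREROUTING -p tcp --dport 5900:5999 \
        -j DNAT --to-destination 10.0.0.11
    iptables -t nat -A PREROUTING -p tcp --dport 6000:6099 \
        -j DNAT --to-destination 10.0.0.12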
I am a colleague of Ewoud, and I want to explain a bit more about how
we have currently implemented this.
All our virtualization hosts, and the manager as well, live only on
the internal network. We have written a web application that is a
self-service portal for our customers. This web application lives on a
host that is publicly reachable. This host can also reach the
virtualization servers on the internal network. For all actions except
viewing a console, the webhost only needs to access the API. So what
happens when a user wants to view the console (we currently use VNC)
is the following:
1. On the webhost we have reserved a number of ports.
2. An API request is made to get the host/port, and we set the ticket
   (the VNC password).
3. We create a tunnel (currently using socat, but it could of course
   also be DNAT or any other kind of proxy) that connects one of the
   free ports on the external webhost to the host/port combo we got
   back from the API - see the sketch below.
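For step 3, the tunnel can be a one-liner. A sketch with invented
values (reserved external port 15900, backend host/port 10.0.0.11:5900
as returned by the API):

    # listen on the reserved public port; each client connection is
    # forked off and forwarded to the backend display host/port
    socat TCP-LISTEN:15900,fork,reuseaddr TCP:10.0.0.11:5900 &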
This is a very simple implementation, but it works well in our
experience.
For added points you can make the proxy smarter, with things like
handling VM migrations and maybe adding WebSocket support for an HTML5
SPICE client ;)
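For the WebSocket part, one option (an assumption on my side, not
something we run) would be a WebSocket-to-TCP bridge such as
websockify from the noVNC project, again with invented values:

    # bridge WebSocket clients on port 6080 to the display backend
    websockify 6080 10.0.0.11:5900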
Kind regards,
Sander