<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<div class="moz-cite-prefix">On 09/08/2014 04:45 PM, Itamar Heim
wrote:<br>
</div>
<blockquote cite="mid:540DC105.6010007@redhat.com" type="cite">On
09/08/2014 12:54 PM, Finstrle, Ludek wrote:
<br>
<blockquote type="cite">
<br>
Hi,
<br>
<br>
I'm quite new to oVirt and I'm going to deploy oVirt into several
tens of locations all around the world.
<br>
<br>
The connection between locations is neither dedicated nor 100%
reliable, as it goes over the internet.
<br>
I'm going with Gluster storage domains mainly.
<br>
I don't need live migrations, or even offline migrations, between
locations (they're independent).
<br>
<br>
What's the best design and components from your point of view? I
believe I'm not the first one with such a design.
<br>
<br>
I think about two possibilities:
<br>
1) One central Engine
<br>
- how do I manage guests when the connection drops between engine
and node?
<br>
- latency is up to 1 second; is that OK even with a working
connection?
<br>
<br>
2) An Engine in every location
<br>
- is it possible to also have one central point with information
from all engines together (at least read-only)?
<br>
- what about central reporting, at least?
<br>
<br>
I prefer one central Engine. My concerns are how to work with
consoles, and having just one ISO storage domain and one export
storage domain (maybe the same hostname for ISO and export in every
location). Another topic is how to reach a console, or
stop/start/migrate a guest inside a location, while the connection
between the only Engine and the nodes in that location is down.
<br>
<br>
Thanks in advance for your experience/ideas,
<br>
<br>
</blockquote>
<br>
We run this with a central engine and remote clusters, but the
remote clusters have decent connectivity.
<br>
<br>
oVirt 3.5 does bring several improvements to fencing management,
which may help those with problematic links.
<br>
<br>
For option #2: ManageIQ (the upstream of Red Hat CloudForms) is a
"CMP" (Cloud Management Platform), which can provide an overall
dashboard, self-service, a service catalog, automation, etc. across
multiple oVirt deployments.
<br>
(They just released their first upstream release last week.)
<br>
<br>
</blockquote>
Using ManageIQ does work, but it is aimed more at managing the
things running on the cloud. In this case you will still have
separate oVirt environments to manage.<br>
<br>
That's not so bad, and probably advisable for a lot of use cases,
as you can't mess up all your DCs with one login.<br>
<br>
Also, it would be nice to have the frontend running somewhere that
doesn't have to be on the management LAN of all the virtual
infrastructure, able to reach even the IPMIs. If you have users
accessing it, they could hack the management server and gain access
to all nodes in all DCs. You could build a separate interface with
API calls to isolate them, but why bother when we already have a
user interface?<br>
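<br>
As a rough illustration of such a read-only central view: the sketch
below polls each site's engine over its REST API and tallies VM
states for a central dashboard. It is only a sketch under my
assumptions; the hostnames and credentials are made up, and the
endpoint path and XML layout follow the oVirt 3.x REST API as I
understand it, so adjust for your version.<br>

```python
import base64
import xml.etree.ElementTree as ET
from urllib.request import Request, urlopen


def fetch_vms_xml(engine_url, user, password):
    # oVirt 3.x exposes a REST API under /api with HTTP Basic auth;
    # newer releases moved it to /ovirt-engine/api.
    req = Request(engine_url.rstrip("/") + "/api/vms",
                  headers={"Accept": "application/xml"})
    token = base64.b64encode((user + ":" + password).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urlopen(req, timeout=10) as resp:
        return resp.read().decode()


def vm_states(xml_text):
    # Tally <vm><status><state> values, e.g. {"up": 12, "down": 3}.
    counts = {}
    for vm in ET.fromstring(xml_text).findall("vm"):
        state = vm.findtext("status/state", default="unknown")
        counts[state] = counts.get(state, 0) + 1
    return counts


if __name__ == "__main__":
    # Hypothetical site list -- replace with your real engines.
    sites = {
        "amsterdam": ("https://engine.ams.example.com",
                      "admin@internal", "secret"),
        "tokyo": ("https://engine.tok.example.com",
                  "admin@internal", "secret"),
    }
    for name, (url, user, pw) in sites.items():
        try:
            print(name, vm_states(fetch_vms_xml(url, user, pw)))
        except OSError as exc:  # unreachable site: degrade, don't die
            print(name, "unreachable:", exc)
```
<br>
The point of the try/except is exactly the concern above: a dropped
link to one DC only marks that site unreachable instead of taking
the whole overview down.<br>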
<br>
Also, when your connection to the DCs is interrupted your
availability will suffer, while the very reason we want to spread
our virtual infrastructure across multiple DCs is to remove the
datacenter as a single point of failure.<br>
<br>
What we would like to do is have a central, highly secure datacenter
that hosts our management infrastructure. From this management
infrastructure we can then manage all our other racks in other DCs
through satellite systems/smart proxies. Something like this:<br>
<br>
<img src="cid:part1.05040904.07080303@netbulae.eu" alt=""><br>
<br>
A more distributed ovirt-engine would IMHO make the virtual
datacenter/cloud infrastructure scale better across locations,
whereas ManageIQ does that across multiple clouds. Two different
use cases.<br>
<br>
Kind regards,<br>
<br>
Jorick Astrego<br>
Netbulae B.V.<br>
<br>
</body>
</html>