On 26 Apr 2016, at 16:48, Martin Sivak <msivak(a)redhat.com> wrote:
>> @awels: to add another layer of indirection via a dedicated
>> hosted-engine per outlet seems a little much. we are talking about 500 *
>> 4GB RAM at least in this example, so 2 TB RAM just for management
>> purposes, if you follow engine hardware recommendations?
> I would not go that far. Creating zones per continent (for example)
> might be enough.
>> At least RHEV states in the documentation you support up to 200 hosts
>> per cluster alone.
> The default configuration seems to only allow 250 hosts per datacenter.
> # engine-config -g MaxNumberOfHostsInStoragePool
> MaxNumberOfHostsInStoragePool: 250 version: general
yep, but that limit is there because within a DC there are a lot of assumptions about flawless,
fast enough communication. The most problematic one is that all hosts need to access the same
storage, and the monitoring gets expensive at that scale.
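If you really need to push a single DC further, the cap itself is just a config value (a
sketch only; raising it does not remove the monitoring cost described above, and the engine
needs a restart to pick it up):

# engine-config -s MaxNumberOfHostsInStoragePool=500
# systemctl restart ovirt-engine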
The situation is different with separate DCs, since there is no cross-DC communication.
I would guess many DCs would actually work fine.
Too many hosts and VMs in total might still be an issue, but a lot has changed since the
last officially published limits. E.g. in steady state the VM status events introduced in
3.6 greatly reduce the traffic needed between each host and the engine.
I would not be so afraid of thousands anymore, but of course YMMV.
> --
> Martin Sivak
> SLA / oVirt
>
> On Tue, Apr 26, 2016 at 4:03 PM, Sven Kieske <svenkieske(a)gmail.com> wrote:
>> On 26.04.2016 14:46, Martin Sivak wrote:
>>> I think that 1000 hosts per engine is a bit over what we recommend
>>> (and support). The fact that all of them are going to be remote might
>>> not be ideal either. The engine assumes the network connection to all
>>> hosts is almost flawless and the necessary routing and distance to
>>> your hosts might not play nice with (for example) the fencing logic.
>>
>> Hi,
>>
>> this seems a little surprising.
>>
>> At least RHEV states in the documentation you support up to 200 hosts
>> per cluster alone.
>>
>> There are no documented maxima for clusters or datacenters though.
>>
>> @awels: to add another layer of indirection via a dedicated
>> hosted-engine per outlet seems a little much. we are talking about 500 *
>> 4GB RAM at least in this example, so 2 TB RAM just for management
>> purposes, if you follow engine hardware recommendations?
yeah, currently the added layer of ManageIQ with hosted engines everywhere is not that helpful
for this particular case. Still, a split per continent or per low-latency area might not be a
bad idea.
I can imagine that with somewhat more tolerant timeouts and refresh intervals it might work
well, with incidents/disconnects being isolated within a DC.
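If someone wants to experiment with that, the knobs are engine-config options again (a rough
sketch only; I am quoting the option names from memory, so verify them with engine-config -l
first, and the engine needs a restart afterwards):

# engine-config -g vdsTimeout
# engine-config -s vdsTimeout=300
# engine-config -s vdsConnectionTimeout=20
# engine-config -s VdsRefreshRate=10
# systemctl restart ovirt-engine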
>>
>> But I agree, ovirt does not handle unstable or remote connections that
right. But most of that is again per-DC. You can't do much cross-DC though (e.g. sharing a
template is a pain).
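Today that basically means bouncing the template through an export domain, e.g. via the REST
API (a rough sketch; the engine host, credentials and template ID are made up, and you still
have to detach the export domain from the source DC and attach it to the target DC before
importing):

# curl -k -u 'admin@internal:secret' -H 'Content-Type: application/xml' \
    -X POST 'https://engine.example.com/ovirt-engine/api/templates/<template-id>/export' \
    -d '<action><storage_domain><name>export</name></storage_domain></action>'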
Thanks
michal
>> well, so you might be better off with hundreds of remote engines, but
>> it seems to be a nightmare to manage, even if you automate everything.
>>
>> My personal experience is that ovirt scales at least to about
>> 30-50 DCs managed by a single engine, but that setup was also on a LAN
>> (though I would say it could scale well beyond these numbers, at least on a
>> LAN).
>>
>> HTH
>>
>> Sven
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users