On 19/08/2020 14:48, Michael Jones wrote:


On 19/08/2020 12:12, Michael Jones wrote:
On 19/08/2020 10:41, Nir Soffer wrote:
There is no warning that the method is deprecated and will be missing functionality.

The steps detailed on the alt install page are for the all-in-one running engine-setup.

It's also worth noting this works fine in:

Version 4.3.1.1-1.el7

but not in:

Version 4.4.1.10-1.el8

(el8 has the change in imageio daemons)

The alternate install method is still useful to have, but I think a red warning about all-in-one on el8 on that page would be good.

Kind Regards,
Michael Jones
Michael, can you file a bug for this?

If you have a good use case for all-in-one deployment (not using hosted engine), please explain it in the bug.

Personally I think a simple all-in-one deployment, without the complexity of hosted engine, is better, and we should keep it. But for that we need to teach the engine to handle the case where the proxy and the daemon are the same server.

In that case the engine will not try to set up a proxy ticket, and the image transfer will work directly with the host daemon.

I'm not very optimistic that we will support this again, since this feature is not needed for RHV customers, but for oVirt it makes sense.

Nir

Yes, I can file a bug,

The main usages / setups I have are:

on-prem installs:

- hosted engine
- gluster
- high availability
- internal ip address
- easy, great...

dedicated host provider, for example an OVH single machine:

- alternate install
- all-in-one

The main reason for the separation is that the cockpit install / hosted engine install causes problems with IP allocations:

the cockpit method requires one IP for the host and one IP for the engine VM, and both IPs must be in the same subnet...

Applying an internal IP would cut off access. To make it even harder, getting public IP blocks didn't work, as the box's main IP wouldn't be in the same subnet, and adding a NIC alias IP doesn't work either (the install fails while setting up the ovirtmgmt network).

At the moment I'm struggling with changing the machine's main IP to be one in the same subnet as the engine's (currently this causes the host to be taken offline by the hosting provider's health checks).

Provided I can change the host's primary IP to one of the OVH failover IPs allocated in a block, I will be able to install using the cockpit.

And after the install I can set up internal IPs with the network tag.

Kind Regards,

Mike

Despite managing to get OVH to disable monitoring (pinging the main IP and rebooting the host), and getting the host in the same IP range as the engine VM...

i.e.:

host IP: 158.x.x.13/32 = not used anymore

new subnet: 54.x.x.x/28

and reserving:
host = 54.x.x.16
engine = 54.x.x.17

[ ERROR ] The Engine VM (54.x.x.17/28) and the default gateway (158.x.x.254) will not be in the same IP subnet.

The hosted engine installer crashes because the gateway is in a different subnet, so all three:

- host
- engine
- gateway

must be in the same subnet...
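The check the installer is making can be illustrated with Python's standard ipaddress module. This is a sketch of the validation, not the actual installer code, and the addresses are illustrative stand-ins for the masked 54.x.x.x / 158.x.x.x ones above:

```python
import ipaddress

def same_subnet(ip: str, gateway: str, prefix: int) -> bool:
    """Return True if ip and gateway fall inside the same network
    at the given prefix length."""
    # strict=False lets us pass a host address and derive its network
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return ipaddress.ip_address(gateway) in net

# Engine VM in 54.0.0.16/28, but the default gateway in 158.x.x.x:
print(same_subnet("54.0.0.17", "158.0.0.254", 28))  # False -> installer aborts

# A gateway inside the same /28 would pass the check:
print(same_subnet("54.0.0.17", "54.0.0.30", 28))    # True
```

So with a /28 routed to the box but the provider's gateway left on the old 158.x.x.254 address, the check can never pass.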

This rules out an install on an OVH dedicated server.

Unless... I can install all-in-one again (this bit works), and then deploy the engine VM into the existing all-in-one setup...

Essentially, the cockpit installation is not compatible with this infrastructure setup.

After going through the documentation again, I understand the best way to approach this would be to have a remote manager, i.e.:

self hosted engine (on-prem) > host/2nd DC/Cluster (remote/ovh)

standalone manager (on-prem) > host/2nd DC/Cluster (remote/ovh)

That resolves the IP issues (you only need the host IP; just don't install the manager on the remote server).

Outstanding: I need to work out the security implications of this.

It's a shame all-in-one is gone, but the above does work, and it even means the remote host can again use local storage.

I'll raise the bug report now I've finished testing, as I think standalone all-in-one dedicated hosts are affordable and open oVirt to a wider user base (keeping hardware requirements minimal).

Thanks again,

Mike