Ah, I see.
The "host" in this context does need to be the FQDN on the backend management / gluster network.
I was able to add the 2nd host, and I'm working on adding the 3rd now.
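In case it helps anyone hitting the same thing, peer membership over the backend network is easy to sanity-check from any host. This is just a sketch; "engine" is the hyperconverged wizard's default volume name and may differ on other deployments:

```shell
# Run on any host: peers should be listed by their backend
# (storage network) FQDNs, all in "Peer in Cluster (Connected)" state.
gluster peer status

# The bricks should likewise reference the backend FQDNs.
# "engine" is the wizard's default volume name (an assumption here).
gluster volume info engine | grep -i brick
```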
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Saturday, March 20, 2021 4:32 PM, David White via Users <users(a)ovirt.org> wrote:
To clarify:
The "Host Address" field keeps defaulting to the storage FQDN (as shown in my screenshot), so I keep changing it to the correct FQDN.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Saturday, March 20, 2021 4:30 PM, David White <dmwhite823(a)protonmail.com> wrote:
> There may be a bug in the latest installer. Or I might have missed a step somewhere.
> I did use the 4.4.5 installer's hyperconverged wizard, yes.
>
> I'm in the Engine console right now, and I only see 1 host.
> I've navigated to Compute -> Hosts.
>
> That said, when I navigate to Compute -> Clusters -> Default, I see this message:
> "Some new hosts are detected in the cluster. You can Import them to engine or Detach them from the cluster."
>
> I clicked on Import to try to import them into the engine.
> On the next screen, I see the other two physical hosts.
>
> I verified the Gluster peer address, as well as the front-end Host address, typed in the root password, and clicked OK. The system acted like it was doing stuff, but then eventually I landed back on the same "Add Hosts" screen as before:
>
> [Screenshot from 2021-03-20 16-28-56.png]
>
> Am I missing something?
>
>
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Saturday, March 20, 2021 4:17 PM, Jayme <jaymef(a)gmail.com> wrote:
>
> > If you deployed with the wizard, the hosted engine should already be HA and can run on any host. If you look at the GUI, you will see a crown beside each host that is capable of running the hosted engine.
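(For the archives: besides the crown icon, this can also be confirmed from a shell on any hyperconverged host with the hosted-engine tool.)

```shell
# Show the HA state of the hosted engine across all capable hosts:
# which hosts are eligible to run the engine VM, and where it runs now.
hosted-engine --vm-status

# A score of 3400 next to a host generally indicates it is healthy
# and eligible; "Engine status ... vm up" marks the host running it.
```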
> >
> > On Sat, Mar 20, 2021 at 5:14 PM David White via Users <users(a)ovirt.org> wrote:
> >
> > > I just finished deploying oVirt 4.4.5 onto a 3-node hyperconverged cluster running on Red Hat Enterprise Linux 8.3.
> > >
> > > Over the course of the setup, I noticed that I had to set up the storage for the engine separately from the gluster bricks.
> > >
> > > It looks like the engine was installed onto /rhev/data-center/ on the first host, whereas the gluster bricks for all 3 hosts are on /gluster_bricks/.
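(That layout sounds right. If it's useful, it can be inspected directly; the paths below are the hyperconverged wizard's defaults, so treat this as a sketch rather than gospel.)

```shell
# Local bricks carved out by the hyperconverged wizard:
df -h /gluster_bricks/*

# The engine storage domain is itself a gluster volume, mounted under
# /rhev/data-center/mnt/glusterSD/ on hosts that access it:
mount | grep glusterSD
```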
> > >
> > > I fear that I may already know the answer to this, but:
> > > Is it possible to make the engine highly available?
> > >
> > > Also, thinking hypothetically here: what would happen to my VMs that are physically on the first server if the first server crashed? The engine is what handles the high availability, correct? So if a VM was running on the first host, there would be nothing to automatically "move" it to one of the remaining healthy hosts.
> > >
> > > Or am I misunderstanding something here?
> > >
> > >
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > > oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/L6MMZSMSGIK...