If the engine manages to 'reboot' the host as you say, it seems that
the engine can communicate with the host.
So it might be that you have a bit of a chicken-and-egg problem: the
host is missing a required network and therefore goes back into the
Non Responsive state.
If you have other (non-mgmt) 'required' networks on the host, temporarily
set those networks to non-required under Compute | Clusters | <cluster> |
Logical Networks | Manage Networks.
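If you prefer to script that step, a rough (untested) sketch with the oVirt
Python SDK (ovirtsdk4) could look like the one below; the engine URL,
credentials and the cluster name are placeholders, and it assumes an
engine/SDK version that lets you update the 'required' flag on a cluster
network assignment in place:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: engine FQDN, credentials and CA path are assumptions
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search='name=Default')[0]   # assumed cluster name
cluster_networks = clusters_service.cluster_service(cluster.id).networks_service()

for net in cluster_networks.list():
    if net.name != 'ovirtmgmt' and net.required:
        # Flip the network to non-required so a host that is missing it
        # is not taken out of service again
        cluster_networks.network_service(net.id).update(
            types.Network(required=False)
        )

connection.close()
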
In Compute | Hosts | <host> | Network Interfaces | Setup Host Networks,
can you attach 'ovirtmgmt' to the host in its current condition?
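For reference, the scripted equivalent of that Setup Networks step might look
roughly like this (the host name, the bond0 NIC and DHCP addressing are
assumptions, and whether it succeeds against a host in its current state is
exactly the open question):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: engine FQDN, credentials and CA path are assumptions
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=myhost')[0]      # assumed host name
host_service = hosts_service.host_service(host.id)

# Attach the ovirtmgmt network to bond0 on the host
host_service.setup_networks(
    modified_network_attachments=[
        types.NetworkAttachment(
            network=types.Network(name='ovirtmgmt'),
            host_nic=types.HostNic(name='bond0'),        # assumed management NIC
            ip_address_assignments=[
                types.IpAddressAssignment(
                    assignment_method=types.BootProtocol.DHCP,
                ),
            ],
        ),
    ],
    check_connectivity=True,
)

# Persist the new network configuration so it survives the reboot
host_service.commit_net_config()

connection.close()
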
Try to reboot the host.
HTH
On Tue, May 22, 2018 at 7:25 PM, Gianluca Cecchi
<gianluca.cecchi(a)gmail.com> wrote:
> Hello,
> I have a dead host in a 4.1.9 environment (a broken RAID controller
> compromised both internal disks, don't ask me how that could happen...).
> I have replaced the controller and disks and reinstalled ng-node on the same
> hardware with the same parameters and the same version (but the ovirtmgmt
> bridge is not present yet, only a bond0 that should back the ovirtmgmt
> bridge once the host is installed).
> In the web admin GUI it had been set to Non Responsive when it failed.
> Now I cannot reinstall it as new.
> If I try to put it into maintenance it is correctly rebooted, but then it
> remains Non Responsive.
> I think I have to somehow "force remove" the previous instance of this
> node, but the option is greyed out...
> How can I clean up the node and then try to install it as new again?
>
> Thanks,
> Gianluca
>