If we need that extra server in the Production DC (for hosted engine
redundancy and to allow maintenance), then let's take a lower-end new
server from 17-26 and replace it with the strong one.
We need to make better use of our servers; I don't think we're even at
50% utilization, judging by the memory consumption the last time I
checked while all slaves were working.
The problem is that we won't have live migration in the Production
cluster if we add the new hosts there (because of the multi-node NUMA
mismatch). We could put two new hosts in the Production cluster, but
then we would have to schedule downtime in order to move them.
I think that coupling the engine upgrade with the new slaves is not
necessary. We can start by installing the hook on ovirt-srv11 and
spinning up new slaves; that way we can also test the hook in
production. Because there is no NFS involved, and live migration isn't
working with the hook, the host is pretty much self-contained and this
can't harm anything.
On Thu, May 12, 2016 at 4:59 PM, Anton Marchukov <amarchuk(a)redhat.com>
wrote:
> I was talking about using the hooks before installing the SSDs, but
> if that can be done reliably also before the upgrade, it's also a
> solution that will help scratch our current itches sooner.
>
That's about it. The hook in 3.6 contains the patch I submitted; it
does not work at all without it. You can, however, use the RPM from 3.5
on 3.6.
> ^ well, the machines being slaves, I think there's no problem having
> to stop them to migrate them. We already have experience with both
> fabric and jenkins, so I guess it should not be hard to automate with
> those tools ;)
>
Yes, there is no problem with that. But I am not aware of a "stop all
slaves on the host" feature in oVirt, so that would mean either manual
clicking or we would need to automate it ourselves. Not a big deal
either way.
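A minimal sketch of the selection logic such automation would need
(assuming we can pull a VM inventory from the engine, e.g. via the REST
API or a fabric task; the inventory shape, hostnames, and the
"jenkins-slave" prefix below are all hypothetical):

```python
def slaves_on_host(vms, host, prefix="jenkins-slave"):
    """Return names of running slave VMs on `host`.

    `vms` is a list of dicts like {"name": ..., "host": ..., "status": ...},
    the kind of listing one could build from the engine's VM inventory
    (hypothetical shape, not an actual oVirt API response).
    """
    return sorted(
        vm["name"]
        for vm in vms
        if vm["host"] == host
        and vm["status"] == "up"
        and vm["name"].startswith(prefix)
    )


# Example inventory (made-up names) -- the returned list is what a
# "stop all slaves on the host" task would then iterate over.
inventory = [
    {"name": "jenkins-slave-01", "host": "ovirt-srv11", "status": "up"},
    {"name": "jenkins-slave-02", "host": "ovirt-srv12", "status": "up"},
    {"name": "engine-vm",        "host": "ovirt-srv11", "status": "up"},
]
print(slaves_on_host(inventory, "ovirt-srv11"))  # ['jenkins-slave-01']
```

The actual stop/migrate calls would then go through the engine (SDK,
REST, or a fabric wrapper), but the host-scoped filtering above is the
part oVirt doesn't give us out of the box.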
--
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat
_______________________________________________
Infra mailing list
Infra(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra