ovirt-srv11

David Caro dcaro at redhat.com
Thu May 12 13:56:15 UTC 2016


On 05/12 15:49, Anton Marchukov wrote:
> Hello All.
> 
> The fixed hook is only in 3.6; the hook version in 3.5 does not work.
> However, we can install the hook from 3.6 into 3.5, as it should be
> compatible, and this is how it was done on the test slave.

I was talking about using the hooks before installing the SSDs, but if that can
also be done reliably before the upgrade, it's a solution that will help
scratch our current itches sooner.

> 
> Also just to remind some drawbacks of that solution as it was:
> 
> 1. It was not puppetized (not a drawback, just a reminder).
> 2. It will break VM migration. This means we will have to move the VMs
> manually when we do host maintenance, or automate this as a Fabric task.

^ Well, since the machines are slaves, I think there's no problem having to stop
them to migrate them; we already have experience with both Fabric and Jenkins,
so I guess it should not be hard to automate with those tools ;)
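
A rough sketch of what such a Fabric task could look like, assuming Fabric 1.x
and the 3.6-era ovirt-engine-sdk-python (ovirtsdk); the engine URL, credentials
and VM names below are placeholders, not our real setup:

    # Hypothetical Fabric tasks: stop/start a Jenkins slave VM around host
    # maintenance, driving the engine through the 3.x Python SDK.
    import time

    from fabric.api import task
    from ovirtsdk.api import API

    ENGINE_URL = 'https://engine.example.com/ovirt-engine/api'  # placeholder


    def _connect():
        # insecure=True skips CA validation; pass ca_file=... in a real setup
        return API(url=ENGINE_URL, username='admin@internal',
                   password='secret', insecure=True)


    def _wait_for_state(api, name, state, timeout=600):
        # Poll the engine until the VM reaches the wanted state
        deadline = time.time() + timeout
        while time.time() < deadline:
            if api.vms.get(name=name).status.state == state:
                return
            time.sleep(10)
        raise RuntimeError('%s did not reach state %s' % (name, state))


    @task
    def stop_slave(vm_name):
        """Shut the slave down cleanly before host maintenance."""
        api = _connect()
        api.vms.get(name=vm_name).shutdown()  # graceful ACPI shutdown
        _wait_for_state(api, vm_name, 'down')
        api.disconnect()


    @task
    def start_slave(vm_name):
        """Bring the slave back once the host is up again."""
        api = _connect()
        api.vms.get(name=vm_name).start()
        _wait_for_state(api, vm_name, 'up')
        api.disconnect()

Usage would be something like "fab stop_slave:vm_name=<slave>" before putting
the host into maintenance and "fab start_slave:vm_name=<slave>" afterwards,
plus marking the node offline in Jenkins first, which can be scripted too.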
> 
> Anton.
> 
> On Thu, May 12, 2016 at 3:36 PM, David Caro <dcaro at redhat.com> wrote:
> 
> > On 05/11 18:44, Eyal Edri wrote:
> > > On Wed, May 11, 2016 at 6:27 PM, David Caro <dcaro at redhat.com> wrote:
> > >
> > > > On 05/11 18:21, Eyal Edri wrote:
> > > > > On Wed, May 11, 2016 at 5:36 PM, Nadav Goldin <ngoldin at redhat.com> wrote:
> > > > >
> > > > > > Hi,
> > > > > > The ovirt-srv11 host is in an empty cluster called 'Production_CentOS'.
> > > > > > It's quite a strong machine with 251GB of RAM; currently it has no VMs
> > > > > > and, as far as I can tell, it isn't used at all.
> > > > > > I want to move it to the 'Jenkins_CentOS' cluster in order to add more
> > > > > > VMs and later upgrade the older clusters to el7 (if we have enough
> > > > > > slaves in the Jenkins_CentOS cluster, we could just take the VMs down
> > > > > > in the Jenkins cluster and upgrade). This is unrelated to the new hosts
> > > > > > ovirt-srv17-26.
> > > > > >
> > > > > >
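
(For reference, a minimal sketch of the host move itself with the same 3.6-era
Python SDK; the engine URL and credentials are placeholders, only the host and
cluster names come from the thread. The host has to be in maintenance before
its cluster can be changed.)

    # Hypothetical sketch: move ovirt-srv11 into the Jenkins_CentOS cluster.
    import time

    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    api = API(url='https://engine.example.com/ovirt-engine/api',  # placeholder
              username='admin@internal', password='secret', insecure=True)

    # Put the host into maintenance first
    api.hosts.get(name='ovirt-srv11').deactivate()
    while api.hosts.get(name='ovirt-srv11').status.state != 'maintenance':
        time.sleep(5)

    # Point it at the target cluster and push the update
    host = api.hosts.get(name='ovirt-srv11')
    host.set_cluster(params.Cluster(name='Jenkins_CentOS'))
    host.update()

    # Bring it back up in the new cluster
    host.activate()
    api.disconnect()

Which is the same maintenance -> edit cluster -> activate dance the webadmin
does.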
> > > > > There is no reason to keep such a strong server there when we can use
> > > > > it for many more slaves on Jenkins.
> > > > > If we need that extra server in the Production DC (for hosted engine
> > > > > redundancy and to allow maintenance), then let's take one of the
> > > > > lower-end new servers from 17-26 and replace it with the strong one.
> > > > > We need to utilize our servers; I don't think we're even at 50%
> > > > > utilization, judging by the memory consumption last time I checked
> > > > > when all slaves were working.
> > > >
> > > > As we already discussed, I strongly recommend implementing the local
> > > > disk VMs, and keeping an eye on the NFS load and network counters.
> > > >
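
(On keeping an eye on the net counters: a throwaway sampler along these lines,
reading plain /proc/net/dev on the hypervisors, is usually enough; the interface
name is a placeholder.)

    # Hypothetical sketch: print per-interface throughput from /proc/net/dev.
    import time


    def read_counters(iface):
        # /proc/net/dev line: "iface: rx_bytes rx_packets ... tx_bytes ..."
        with open('/proc/net/dev') as f:
            for line in f:
                if line.strip().startswith(iface + ':'):
                    fields = line.split(':', 1)[1].split()
                    return int(fields[0]), int(fields[8])  # rx_bytes, tx_bytes
        raise ValueError('interface %s not found' % iface)


    def watch(iface='em1', interval=5):
        rx_prev, tx_prev = read_counters(iface)
        while True:
            time.sleep(interval)
            rx, tx = read_counters(iface)
            print('%s: rx %.1f MB/s, tx %.1f MB/s' % (
                iface,
                (rx - rx_prev) / float(interval) / 2 ** 20,
                (tx - tx_prev) / float(interval) / 2 ** 20))
            rx_prev, tx_prev = rx, tx


    if __name__ == '__main__':
        watch()

nfsstat on the storage server gives the NFS side of the picture.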
> > >
> > > I agree, and this is the plan, though before that we need to:
> > >
> > >    1. Back up & upgrade the HE instance to 3.5 (the local hook is a 3.6
> > >    one; I prefer using it with engine 3.6)
> > >    2. Reinstall all servers from 01-10 to use the SSD
> >
> > ^ Do we need that? Can't we start using the local disk hook on the new
> > servers?
> > >
> > >
> > > Once we do that, we can start moving all VMs to use the local disk hook.
> > >
> > >
> > >
> > > >
> > > > >
> > > > >
> > > > >
> > > > > > I'm not sure why it was put there, so I'm posting here in case anyone
> > > > > > objects or I'm missing something.
> > > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > Nadav.
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Eyal Edri
> > > > > Associate Manager
> > > > > RHEV DevOps
> > > > > EMEA ENG Virtualization R&D
> > > > > Red Hat Israel
> > > > >
> > > > > phone: +972-9-7692018
> > > > > irc: eedri (on #tlv #rhev-dev #rhev-integ)
> > > >
> > > >
> > > >
> > > > --
> > > > David Caro
> > > >
> > > > Red Hat S.L.
> > > > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> > > >
> > > > Tel.: +420 532 294 605
> > > > Email: dcaro at redhat.com
> > > > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > > > Web: www.redhat.com
> > > > RHT Global #: 82-62605
> > > >
> > >
> > >
> > >
> > > --
> > > Eyal Edri
> > > Associate Manager
> > > RHEV DevOps
> > > EMEA ENG Virtualization R&D
> > > Red Hat Israel
> > >
> > > phone: +972-9-7692018
> > > irc: eedri (on #tlv #rhev-dev #rhev-integ)
> >
> > --
> > David Caro
> >
> > Red Hat S.L.
> > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> >
> > Tel.: +420 532 294 605
> > Email: dcaro at redhat.com
> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > Web: www.redhat.com
> > RHT Global #: 82-62605
> >
> >
> >
> 
> 
> -- 
> Anton Marchukov
> Senior Software Engineer - RHEV CI - Red Hat



-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dcaro at redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605