
Hi,

ovirt-srv11 is a host in an empty cluster called 'Production_CentOS'. It's quite a strong machine with 251GB of RAM; currently it has no VMs and, as far as I can tell, isn't used at all. I want to move it to the 'Jenkins_CentOS' cluster in order to add more VMs and later upgrade the older clusters to EL7 (if we have enough slaves in the Jenkins_CentOS cluster, we could just take the VMs down in the Jenkins cluster and upgrade). This is unrelated to the new hosts ovirt-srv17-26.

I'm not sure why it was put there, so I'm posting here in case anyone objects or I'm missing something.

Thanks,
Nadav
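For reference, the move itself can be scripted against the engine's REST API once the host is in maintenance. This is only a rough sketch: the engine URL, credentials and the exact 3.x endpoint behaviour (deactivate/activate actions, partial PUT of the cluster) are assumptions that should be checked against the live /api before running anything.

```python
# Sketch: move ovirt-srv11 into the Jenkins_CentOS cluster via the oVirt 3.x
# REST API. Engine URL and credentials are placeholders; verify the endpoints
# against your engine before use.
import time
import xml.etree.ElementTree as ET
import requests

ENGINE = "https://engine.example.com/api"   # hypothetical engine URL
AUTH = ("admin@internal", "password")       # placeholder credentials
XML = {"Content-Type": "application/xml"}

def host_id(name):
    """Look up a host's id by name through the search API."""
    r = requests.get(ENGINE + "/hosts", params={"search": "name=" + name},
                     auth=AUTH, verify=False)
    r.raise_for_status()
    return ET.fromstring(r.content).find("host").get("id")

hid = host_id("ovirt-srv11")

# 1. Put the host into maintenance (it has no VMs, so nothing needs migrating).
requests.post(ENGINE + "/hosts/%s/deactivate" % hid, data="<action/>",
              headers=XML, auth=AUTH, verify=False)
time.sleep(60)  # crude wait; a real script would poll the host status

# 2. Point the host at the new cluster while it is in maintenance.
requests.put(ENGINE + "/hosts/%s" % hid,
             data="<host><cluster><name>Jenkins_CentOS</name></cluster></host>",
             headers=XML, auth=AUTH, verify=False)

# 3. Activate it again, now as a Jenkins_CentOS host.
requests.post(ENGINE + "/hosts/%s/activate" % hid, data="<action/>",
              headers=XML, auth=AUTH, verify=False)
```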

On 05/11 17:36, Nadav Goldin wrote:
I'm not sure why it was put there, so posting here if anyone objects or I'm missing something.
I think it was being used to test the local disk hooks; amarchuk might know more.
--
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dcaro@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605

Hello David, Nadav.

This cluster is not related to the hook. It was intended for the migration of hosts from Fedora to CentOS. That work was left unfinished because we have only one host in that cluster, and at the time we did not have any more bare metal to add as a second host, so we were not able to move VMs there from the Production cluster. The plan was to add a second host as soon as it became available, then move the VMs from Production there, then reprovision the Fedora hosts in Production with CentOS and essentially remove the Production Fedora cluster.

I'm not sure if the plans for the upgrades have changed. If they have, then you can of course take this host.

Anton.

On Wed, May 11, 2016 at 4:43 PM, David Caro <dcaro@redhat.com> wrote:
I think it was being used to test the local disk hooks; amarchuk might know more.
--
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat

On Wed, May 11, 2016 at 5:36 PM, Nadav Goldin <ngoldin@redhat.com> wrote:
ovirt-srv11 host is in an empty cluster called 'Production_CentOS'; it's quite a strong machine with 251GB of RAM, currently it has no VMs and as far as I can tell isn't used at all. I want to move it to the 'Jenkins_CentOS' cluster in order to add more VMs [...]
There is no reason to keep such a strong server there when we can use it for many more slaves on Jenkins. If we need that extra server in the Production DC (for hosted-engine redundancy and to allow maintenance), then let's take one of the lower-end new servers from 17-26 to replace the strong one. We need to utilize our servers; I don't think we're even at 50% utilization, judging by the memory consumption last time I checked when all the slaves were working.
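For what it's worth, a rough way to put a number on that is to compare the memory defined for all VMs against the physical memory of all hosts. This is only a sketch: the engine URL and credentials are placeholders, the <memory> element names are assumptions, and it ignores API paging, so verify against the live API before trusting the percentage.

```python
# Sketch: rough memory utilization = VM-defined memory / physical host memory,
# via the oVirt 3.x REST API. URL, credentials and element names are assumptions.
import xml.etree.ElementTree as ET
import requests

ENGINE = "https://engine.example.com/api"   # hypothetical engine URL
AUTH = ("admin@internal", "password")       # placeholder credentials

def total_memory(collection, tag):
    """Sum the <memory> element (bytes) over every resource in a collection."""
    r = requests.get(ENGINE + "/" + collection, auth=AUTH, verify=False)
    r.raise_for_status()
    return sum(int(e.findtext("memory", "0"))
               for e in ET.fromstring(r.content).findall(tag))

host_mem = total_memory("hosts", "host")
vm_mem = total_memory("vms", "vm")
print("VM-defined memory is %.0f%% of physical host memory"
      % (100.0 * vm_mem / host_mem))
```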
--
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

On 05/11 18:21, Eyal Edri wrote:
There is no reason to keep such a strong server there when we can use it for many more slaves on Jenkins. [...]
As we already discussed, I strongly recommend implementing the local-disk VMs, and keeping an eye on the NFS load and the network counters.
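For illustration, a crude way to keep an eye on the network counters on the storage box is to sample /proc/net/dev; the interface name and sampling interval below are assumptions.

```python
# Sketch: print RX/TX throughput for one interface by sampling /proc/net/dev.
import time

def rx_tx_bytes(iface):
    """Return (rx_bytes, tx_bytes) for one interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError("interface %s not found" % iface)

IFACE, INTERVAL = "eth0", 10        # assumed interface name and sampling period
old_rx, old_tx = rx_tx_bytes(IFACE)
while True:
    time.sleep(INTERVAL)
    rx, tx = rx_tx_bytes(IFACE)
    print("%s: rx %.1f MB/s, tx %.1f MB/s"
          % (IFACE, (rx - old_rx) / INTERVAL / 1e6, (tx - old_tx) / INTERVAL / 1e6))
    old_rx, old_tx = rx, tx
```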

On Wed, May 11, 2016 at 6:27 PM, David Caro <dcaro@redhat.com> wrote:
As we already discussed, I strongly recommend implementing the local-disk VMs, and keeping an eye on the NFS load and the network counters.
I agree, and this is the plan, though before that we need to:

1. Back up & upgrade the HE instance to 3.5 (the local hook is a 3.6 one, and I prefer using it with engine 3.6).
2. Reinstall all servers from 01-10 to use the SSD.

Once we do that we can start moving all VMs to use the local disk hook.
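For step 1, the backup part could be wrapped as a small fabric task along these lines; the engine host name, the paths and the --scope flag are assumptions, so check `engine-backup --help` on the engine machine first.

```python
# Sketch of a Fabric (1.x) task that takes an engine backup before the upgrade.
from fabric.api import env, run, task

env.hosts = ["engine.example.com"]   # hypothetical engine host
env.user = "root"

@task
def backup_engine():
    """Run engine-backup and leave the archive under /var/backups."""
    run("mkdir -p /var/backups")
    run("engine-backup --mode=backup --scope=all "
        "--file=/var/backups/engine-$(date +%Y%m%d).tar.bz2 "
        "--log=/var/backups/engine-backup.log")
```

Something like `fab backup_engine` would then run it against the engine host.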

On 05/11 18:44, Eyal Edri wrote:
I agree, and this is the plan, though before that we need to:
1. Back up & upgrade the HE instance to 3.5 (the local hook is a 3.6 one, and I prefer using it with engine 3.6).
2. Reinstall all servers from 01-10 to use the SSD.
^ Do we need that? Can't we start using the local disk hook on the new servers?

Hello All.

The fixed hook is only in 3.6; the hook version in 3.5 does not work. We can, however, install the hook from 3.6 into 3.5, as it should be compatible, and this is how it was done on the test slave.

Also, just to remind everyone of some drawbacks of that solution as it was:

1. It was not puppetized (not a drawback, just a reminder).
2. It will break VM migration, meaning we will have to move the VMs manually when we do host maintenance, or automate this as a fabric task.

Anton.

On Thu, May 12, 2016 at 3:36 PM, David Caro <dcaro@redhat.com> wrote:
1. Back up & upgrade the HE instance to 3.5 (the local hook is a 3.6 one, and I prefer using it with engine 3.6).
2. Reinstall all servers from 01-10 to use the SSD.
^ Do we need that? can't we start using the local disk hook on the new servers?

On 05/12 15:49, Anton Marchukov wrote:
The fixed hook is only in 3.6; the hook version in 3.5 does not work. We can, however, install the hook from 3.6 into 3.5, as it should be compatible, and this is how it was done on the test slave.
I was talking about using the hooks before installing the SSDs, but if that can be done reliably before the upgrade too, it's also a solution that will help scratch our current itches sooner.
Also, just to remind everyone of some drawbacks of that solution as it was:
1. It was not puppetized (not a drawback, just a reminder).
2. It will break VM migration, meaning we will have to move the VMs manually when we do host maintenance, or automate this as a fabric task.
^ Well, the machines being slaves, I think there's no problem having to stop them to migrate them; we already have experience with both fabric and jenkins, so I guess it should not be hard to automate with those tools ;)

I was talking about using the hooks before installing the SSDs, but if that can be done reliably before the upgrade too, it's also a solution that will help scratch our current itches sooner.
That's about it. The hook in 3.6 contains the patch I submitted. It does not work without it at all. Although you can use rpm from 3.5 in 3.6.
^ well, the machines being slaves I think there's no problem having to stop them to migrate them, we have already experience with both fabric and jenkins so I guess it should not be hard to automate with those tools ;)
Yes, there is no problem with that. But I am not aware of a "stop all slaves on the host" feature in oVirt, so that would be either manual clicking or we need to fabricate it. Not a big deal either.
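For illustration, such a task could look roughly like this with fabric plus the engine REST API; the engine URL, credentials, the `host = <name>` VM search syntax and the shutdown action are all assumptions to verify before pointing it at production.

```python
# Sketch of a "stop all slaves on a host" Fabric (1.x) task using the oVirt
# 3.x REST API. URL, credentials and endpoints are assumptions.
import xml.etree.ElementTree as ET
import requests
from fabric.api import task

ENGINE = "https://engine.example.com/api"   # hypothetical engine URL
AUTH = ("admin@internal", "password")       # placeholder credentials
XML = {"Content-Type": "application/xml"}

@task
def stop_vms_on_host(host_name):
    """Ask the engine to shut down every VM currently running on host_name."""
    r = requests.get(ENGINE + "/vms", params={"search": "host=" + host_name},
                     auth=AUTH, verify=False)
    r.raise_for_status()
    for vm in ET.fromstring(r.content).findall("vm"):
        print("shutting down %s" % vm.findtext("name"))
        requests.post(ENGINE + "/vms/%s/shutdown" % vm.get("id"),
                      data="<action/>", headers=XML, auth=AUTH, verify=False)
```

It would then be invoked as something like `fab stop_vms_on_host:ovirt-srv11`.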

If we need that extra server in the Production DC (for hosted-engine redundancy and to allow maintenance), then let's take one of the lower-end new servers from 17-26 to replace the strong one. [...]
The problem is that we won't have live migration in the Production cluster if we add the new hosts there (because of the multi-node NUMA mismatch). We could put two of the new hosts in the Production cluster, but then we would have to schedule a downtime in order to move them.

I think that coupling the engine upgrade with the new slaves is not necessary. We can start by installing the hook on ovirt-srv11 and spinning up new slaves; that way we can also test the hook in production. Because there is no NFS involved, and live migration isn't working with the hook anyway, the host is pretty much self-contained and this can't harm anything.

I think that coupling the engine upgrade with the new slaves is not necessary. We can start by installing the hook on ovirt-srv11 and spinning up new slaves; that way we can also test the hook in production. Because there is no NFS involved, and live migration isn't working with the hook anyway, the host is pretty much self-contained and this can't harm anything.
FYI, we used to have the fc21-scratchpad-test slave running in upstream oVirt in production for certainly more than half a year. I'm not sure why it was deleted, or whether that was related to local disk or not, but I think this should be enough to consider the hook as having been tested in production. What was not tested is the effect on the NFS server, as we had only one slave. But nowadays, seeing the overrun counters on the storages, we can conclude that it will help, since anything that directs traffic away from the storages will help.

Anton.
participants (4)
- Anton Marchukov
- David Caro
- Eyal Edri
- Nadav Goldin