[JIRA] (OVIRT-1756) configure jenkins.ovirt.org to G1 garbage collector

[ https://ovirt-jira.atlassian.net/browse/OVIRT-1756?page=com.atlassian.jira.p... ]

Barak Korren reassigned OVIRT-1756:
-----------------------------------

    Assignee: Evgheni Dereveanchin  (was: infra)
configure jenkins.ovirt.org to G1 garbage collector
---------------------------------------------------
                Key: OVIRT-1756
                URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1756
            Project: oVirt - virtualization made easy
         Issue Type: Improvement
         Components: oVirt CI
           Reporter: rgolanbb
           Assignee: Evgheni Dereveanchin
I noticed the Jenkins process CPU consumption going over 100% and stalling the web handlers. Neither I/O wait nor memory is the problem. What I suspect is heavy GC activity and GC pressure, given the 12GB heap and the fairly large number of users and requests. What we can do is enable GC logging to confirm whether GC pauses really are the cause, and move to the G1 garbage collector. See this post from CloudBees on moving to the G1 collector: https://www.cloudbees.com/blog/joining-big-leagues-tuning-jenkins-gc-responsiveness-and-stability
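As a rough sketch of what that change could look like on the Jenkins master (assuming a Java 8 JVM and an RPM-style install where options live in /etc/sysconfig/jenkins; the exact file and variable name depend on how Jenkins is deployed here):

    # /etc/sysconfig/jenkins (illustrative; adjust to the actual service config)
    # Switch to G1 and enable GC logging so pauses can be measured first.
    JENKINS_JAVA_OPTIONS="-Xms12g -Xmx12g \
        -XX:+UseG1GC \
        -XX:+ParallelRefProcEnabled \
        -XX:+UseStringDeduplication \
        -Xloggc:/var/log/jenkins/gc.log \
        -XX:+PrintGCDetails \
        -XX:+PrintGCDateStamps \
        -XX:+UseGCLogFileRotation \
        -XX:NumberOfGCLogFiles=5 \
        -XX:GCLogFileSize=20m"

With logging in place we can check the GC log for long pauses before deciding on any further G1 tuning (pause-time targets and so on), along the lines the CloudBees post describes.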
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100070)
participants (1)
- Barak Korren (oVirt JIRA)