[ https://ovirt-jira.atlassian.net/browse/OVIRT-1756?page=com.atlassian.jir... ]
Barak Korren updated OVIRT-1756:
--------------------------------
Component/s: oVirt CI
Epic Link: OVIRT-403
Issue Type: Improvement (was: By-EMAIL)
configure jenkins.ovirt.org to use the G1 garbage collector
------------------------------------------------------------
Key: OVIRT-1756
URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1756
Project: oVirt - virtualization made easy
Issue Type: Improvement
Components: oVirt CI
Reporter: rgolanbb
Assignee: infra
I noticed the Jenkins process CPU consumption going over 100% and stalling the
web handlers. Neither I/O wait nor memory is a problem. What I suspect is going
on is heavy GC activity and GC pressure, given the 12GB heap and the fairly
large number of users and requests. What we can do is enable GC logging to
confirm that the stalls really are GC pauses, and then move to the G1 garbage
collector.
See this post from CloudBees on the move to the G1 collector:
https://www.cloudbees.com/blog/joining-big-leagues-tuning-jenkins-gc-resp...
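As a rough sketch of the kind of change meant here, assuming an RPM-based
install where the service reads JENKINS_JAVA_OPTIONS from /etc/sysconfig/jenkins
and runs on Java 8 (the file path, variable name and heap size are assumptions;
the flags themselves are standard HotSpot options):

    # /etc/sysconfig/jenkins (assumed location - adjust to the actual service config)
    # First block: GC logging, to confirm the stalls line up with GC pauses.
    # Second block: switch the collector to G1.
    JENKINS_JAVA_OPTIONS="-Xms12g -Xmx12g \
      -Xloggc:/var/log/jenkins/gc.log \
      -XX:+PrintGCDetails \
      -XX:+PrintGCDateStamps \
      -XX:+PrintGCCause \
      -XX:+UseGCLogFileRotation \
      -XX:NumberOfGCLogFiles=5 \
      -XX:GCLogFileSize=20m \
      -XX:+UseG1GC \
      -XX:+ParallelRefProcEnabled \
      -XX:+ExplicitGCInvokesConcurrent"

The logging flags can be rolled out first on their own, so we have pause data
from before and after the switch to G1 for comparison.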
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100070)