Barak Korren commented on OVIRT-2661:
-------------------------------------
I've cleaned up and restored the slave. Growing the disk is not an option, since we'd need
to do that for all slaves.
The issue seems to be that, with the diverse set of work we have running on the slave, the
mock caches simply pile up, even though we keep them for only two days. For most jobs that
is not an issue, but it makes the node/appliance build jobs crash because of the sheer
amount of space they use. Once those jobs crash, they leave running processes behind that
prevent their leftover data from being deleted, which in turn makes the next job crash
as well.
It seems we simply crossed some threshold in the number of separate mock environment
caches we keep. In the short term, the only simple solution would be to decrease the time
we keep mock caches.
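A minimal sketch of that short-term fix: prune mock cache directories older than a shorter retention window. The cache path (`/var/cache/mock`) and the one-day retention below are assumptions for illustration, not the actual CI configuration.

```shell
#!/bin/sh
# Sketch: remove mock environment caches older than N days.
# prune_mock_caches CACHE_DIR KEEP_DAYS
prune_mock_caches() {
    cache_dir="${1:-/var/cache/mock}"  # assumed default mock cache location
    keep_days="${2:-1}"                # reduced from the current two days
    # Remove only top-level cache directories not modified within the
    # retention window; -mtime +N matches things older than N+1 days.
    find "$cache_dir" -mindepth 1 -maxdepth 1 -type d \
        -mtime +"$keep_days" -exec rm -rf {} +
}
```

Note this only addresses the space pressure; leftover processes from crashed jobs would still need to be killed before their data can be deleted.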
No space left on vm0038.workers-phx.ovirt.org
---------------------------------------------
Key: OVIRT-2661
URL:
https://ovirt-jira.atlassian.net/browse/OVIRT-2661
Project: oVirt - virtualization made easy
Issue Type: Outage
Components: Jenkins Slaves
Reporter: sbonazzo
Assignee: infra
Hi, I took
https://jenkins.ovirt.org/computer/vm0038.workers-phx.ovirt.org/
offline due to no space left.
Can you please reprovision the VM, and possibly add some more disk to it
while doing so?
Thanks,
--
Sandro Bonazzola
Manager, Software Engineering, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com <https://red.ht/sig>
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100098)