First of all, I'd like to clarify the Nagios status message:

DISK OK - free space: /srv/resources 183768 MB (15.98% inode=100%): 

This shows free space as an absolute value and a percentage: 100% of inodes are free and ~16% of disk space is free.
That is right around the 15% warning threshold, which is why we're seeing alerts.
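For anyone skimming the alert, here's a quick sanity parse of that exact status line (the line itself is copied from the alert above; the parsing is my own sketch, not anything from the check_disk plugin):

```shell
# Parse the Nagios status line above; note it reports FREE space/inodes,
# not used. Format assumed from the alert text in this thread.
status='DISK OK - free space: /srv/resources 183768 MB (15.98% inode=100%):'
free_mb=$(printf '%s\n' "$status" | sed -E 's/.* ([0-9]+) MB.*/\1/')
free_pct=$(printf '%s\n' "$status" | sed -E 's/.*\(([0-9.]+)%.*/\1/')
inode_pct=$(printf '%s\n' "$status" | sed -E 's/.*inode=([0-9]+)%.*/\1/')
echo "free: ${free_mb} MB, ${free_pct}% disk, ${inode_pct}% inodes"
```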

The free-space situation has improved a lot since the tested retention patch was merged.
However, it only cleans up RPMs, so node ISOs keep filling up their directory:
# du -sh master/*
195G    master/iso
135G    master/rpm

Also, the node ISO directory contains corrupted directory names, which is a known bug (OVIRT-2355),
and in total more than a hundred ISOs are stored inside:

# du -sh master/iso/*                              
56G     master/iso/ovirt-node-ng-installer
1.5G    master/iso/ovirt-node-ng-installer-j-fc
9.6G    master/iso/ovirt-node-ng-installer-master-
65G     master/iso/ovirt-node-ng-installer-master-e
62G     master/iso/ovirt-node-ng-installer-master-fc
3.0G    master/iso/oVirt-toolsSetup


We'll need to fix the naming here and then create a retention script as well.

For now I've kept the last 5 builds in each of these directories, which cleared 200 GB (17% more free space).
I believe this is the major factor contributing to disk usage at this point, as each node ISO is 1.2 GB.
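The manual cleanup I did could become the retention script mentioned above. A minimal sketch, with the function name and layout being my assumptions and only the keep-last-5 policy taken from this thread (dry-run: it prints candidates, swap echo for rm -f once verified):

```shell
# Keep only the newest $2 builds in each directory under $1 and print
# everything older. prune_isos is a hypothetical name; layout assumed
# to match the du output above (one subdirectory per installer flavor).
prune_isos() {
    root="$1"; keep="${2:-5}"
    for dir in "$root"/*/; do
        [ -d "$dir" ] || continue
        # newest first; everything past the first $keep is a removal candidate
        ls -1t "$dir" | tail -n +$((keep + 1)) | while read -r iso; do
            echo "would remove: $dir$iso"   # swap 'echo' for 'rm -f --' when confident
        done
    done
}
prune_isos master/iso 5
```

Sorting by mtime is the simplest policy; if build timestamps ever get rewritten (e.g. by a repo sync), sorting by the version embedded in the file name would be safer.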

We can add disk space but there's not much point if it's going to be used for stale ISOs in misnamed directories.


On Mon, Sep 3, 2018 at 2:23 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:


2018-09-03 14:20 GMT+02:00 Eyal Edri <eedri@redhat.com>:


On Mon, Sep 3, 2018 at 3:12 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:


2018-09-03 14:07 GMT+02:00 Ehud Yonasi <eyonasi@redhat.com>:
Checking

We are releasing 4.2.6 GA today; the increase in usage may have been caused by a backup copy of the release repo made while updating it.
I would consider increasing the number of inodes available on that file system.

Yeah, this is planned; we now have more storage available, and there are talks about splitting the official builds and CI builds onto different servers.

Evgheni, is there additional available space on current storage we can allocate for resources? 

There's plenty of disk space; only inodes are running out.


On Mon, Sep 3, 2018 at 3:06 PM Eyal Edri <eedri@redhat.com> wrote:
Can you check what is causing it? We should have resolved the space issue by applying the retention script and dropping 4.1.

---------- Forwarded message ---------
From: <nagios@monitoring.ovirt.org>
Date: Mon, Sep 3, 2018 at 2:42 PM
Subject: ** PROBLEM Service Alert: Resources file server/Resources Partition is WARNING **
To: <eedri@redhat.com>


***** Nagios *****

Notification Type: PROBLEM

Service: Resources Partition
Host: Resources file server
Address: resources.ovirt.org
State: WARNING

Date/Time: Mon Sep 3 11:33:23 UTC 2018

Additional Info:

DISK WARNING - free space: /srv/resources 166334 MB (14.46% inode=100%):


--

Eyal Edri


MANAGER

RHV/CNV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA

TRIED. TESTED. TRUSTED.
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

_______________________________________________
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/infra@ovirt.org/message/C3YOLSRLODL47JIKDCD2SLBGYKJ4TWRC/




--

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA

sbonazzo@redhat.com   









--
Regards,
Evgheni Dereveanchin