ovirt-imageio leaves .guestfs-0 folder under /var/tmp during check-patch

Hi,

Should it be like that? Is there a way to clean this leftover in the check-patch script?

Thanks,
Gil

IIRC, this is the default location libguestfs caches files. It can be changed with LIBGUESTFS_TMPDIR env parameter, but whether the default behaviour should be changed is a different question I guess.

On Tue, May 23, 2017 at 4:46 PM, Gil Shinar <gshinar@redhat.com> wrote:
Hi,
Should it be like that? Is there a way to clean this leftover in the check-patch script?
Thanks Gil
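For reference, a minimal sketch of how a check-patch script could act on the LIBGUESTFS_TMPDIR suggestion above and clean up after itself. The variable handling here is illustrative and not taken from the actual oVirt CI scripts; note that in libguestfs the cached .guestfs-<UID> appliance directory is governed by LIBGUESTFS_CACHEDIR (which defaults to /var/tmp), while LIBGUESTFS_TMPDIR covers other temporary files:

# Illustrative check-patch snippet: point libguestfs at a job-local directory
# and remove it when the script exits.
export LIBGUESTFS_TMPDIR="$(mktemp -d /var/tmp/guestfs-job.XXXXXX)"
export LIBGUESTFS_CACHEDIR="$LIBGUESTFS_TMPDIR"
trap 'rm -rf "$LIBGUESTFS_TMPDIR"' EXIT
# ... run the tests that may invoke libguestfs tools here ...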

On 23 May 2017 at 17:25, Nadav Goldin <ngoldin@redhat.com> wrote:
IIRC, this is the default location libguestfs caches files. It can be changed with LIBGUESTFS_TMPDIR env parameter, but whether the default behaviour should be changed is a different question I guess.
Question is why does it stay there once the job is done

Also I'm guessing it's creating it via libvirt somehow b/c imageio does not have any bind-mounts into mock.

-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On Tue, May 23, 2017 at 5:34 PM, Barak Korren <bkorren@redhat.com> wrote:
On 23 May 2017 at 17:25, Nadav Goldin <ngoldin@redhat.com> wrote:
IIRC, this is the default location libguestfs caches files. It can be changed with LIBGUESTFS_TMPDIR env parameter, but whether the default behaviour should be changed is a different question I guess.
Question is why does it stay there once the job is done
It's in /var/tmp and it's supposed to be cached. Is there an issue with this? It's a 400MB image, AFAIR. Y.
Also I'm guessing it's creating it via libvirt somehow b/c imageio does not have any bind-mounts into mock.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

This may be vdsm, imageio does not use libguestfs.

On Tue, May 23, 2017 at 18:35, Yaniv Kaul <ykaul@redhat.com> wrote:
On Tue, May 23, 2017 at 5:34 PM, Barak Korren <bkorren@redhat.com> wrote:
On 23 May 2017 at 17:25, Nadav Goldin <ngoldin@redhat.com> wrote:
IIRC, this is the default location libguestfs caches files. It can be changed with LIBGUESTFS_TMPDIR env parameter, but whether the default behaviour should be changed is a different question I guess.
Question is why does it stay there once the job is done
It's in /var/tmp and it's supposed to be cached. Is there an issue with this? It's a 400MB image, AFAIR. Y.
Also I'm guessing it's creating it via libvirt somehow b/c imageio does not have any bind-mounts into mock.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On 23 May 2017 at 18:34, Yaniv Kaul <ykaul@redhat.com> wrote:
It's in /var/tmp and it's supposed to be cached. Is there an issue with this? It's a 400MB image, AFAIR.
We currently have /var/tmp wiped out after each and every job run. We are looking into stopping that to allow it to be used for persistent caches, but we don't want the slaves to fill up as a result. We need to understand how fast this may accumulate.

400MB accumulation per run is a lot. Unless this is stable and also gets recycled automatically.

-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On Wed, May 24, 2017 at 9:46 AM Barak Korren <bkorren@redhat.com> wrote:
On 23 May 2017 at 18:34, Yaniv Kaul <ykaul@redhat.com> wrote:
It's in /var/tmp and it's supposed to be cached. Is there an issue with this? It's a 400MB image, AFAIR.
We currently have /var/tmp wiped out after each and every job run.
Please keep this behavior.
We are looking into stopping that to allow it to be used for persistent caches,
Use /var/cache?

but we don't want the slaves to fill up as a result.
We need to understand how fast this may accumulate.
400MB accumulation per run is a lot. Unless this is stable and also gets recycled automatically.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On Wed, May 24, 2017 at 10:30 AM, Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, May 24, 2017 at 9:46 AM Barak Korren <bkorren@redhat.com> wrote:
On 23 May 2017 at 18:34, Yaniv Kaul <ykaul@redhat.com> wrote:
It's in /var/tmp and it's supposed to be cached. Is there an issue with this? It's a 400MB image, AFAIR.
We currently have /var/tmp wiped out after each and every job run.
Please keep this behavior.
We are looking into stopping that to allow it to be used for persistent caches,
Use /var/cache?
/dev/shm is just as good. It's only 400MB. Y.
but we don't want the slaves to fill up as a result.
We need to understand how fast this may accumulate.
400MB accumulation per run is a lot. Unless this is stable and also gets recycled automatically.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On 24 May 2017 at 11:17, Yaniv Kaul <ykaul@redhat.com> wrote:
/dev/shm is just as good. It's only 400MB. Y.
Forgive my language but, hell no. This is not the gigantic Lago bare metals you are used to. We don't want GWT builds to start failing on running out of RAM.

-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On Wed, May 24, 2017 at 11:35 AM, Barak Korren <bkorren@redhat.com> wrote:
On 24 May 2017 at 11:17, Yaniv Kaul <ykaul@redhat.com> wrote:
/dev/shm is just as good. It's only 400MB. Y.
Forgive my language but, hell no. This is not the gigantic Lago bare metals you are used to. We don't want GWT builds to start failing on running out of RAM.
Buy more RAM. Y.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On Wed, May 24, 2017 at 12:38 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
On Wed, May 24, 2017 at 11:35 AM, Barak Korren <bkorren@redhat.com> wrote:
On 24 May 2017 at 11:17, Yaniv Kaul <ykaul@redhat.com> wrote:
/dev/shm is just as good. It's only 400MB. Y.
Forgive my language but, hell no. This is not the gigantic Lago bare metals you are used to. We don't want GWT builds to start failing on running out of RAM.
Buy more RAM.
This is the best solution as having the cache on the ram will shorten the time of engine jobs.
Y.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

To get back to the original point - I do not see a connection with imageio anywhere. It's libguestfs's temp dir. Now to decide what to do with it I think we should first understand which test uses/invokes libguestfs and for what purpose?

On 24 May 2017, at 12:35, Gil Shinar <gshinar@redhat.com> wrote:

On Wed, May 24, 2017 at 12:38 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
On Wed, May 24, 2017 at 11:35 AM, Barak Korren <bkorren@redhat.com> wrote:
On 24 May 2017 at 11:17, Yaniv Kaul <ykaul@redhat.com> wrote:
/dev/shm is just as good. It's only 400MB. Y.
Forgive my language but, hell no. This is not the gigantic Lago bare metals you are used to. We don't want GWT builds to start failing on running out of RAM.
Buy more RAM.
This is the best solution as having the cache on the ram will shorten the time of engine jobs.
Y.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

I wrote that it was imageio because I have disabled deletion of /var/tmp on one job only (jenkins check-patch) and saw that on the same Jenkins slave only imageio check-patch and Jenkins check-patch run. Jenkins check-patch has nothing to do with libguestfs so I assumed that imageio did. Here is a list of running jobs on the slave I have checked /var/tmp on. The imageio job cleans /var/tmp and jenkins job doesn't.

[image: Inline image 1]

Anyhow, I'll take your word on that and assume that the Jenkins build history has bugs and a VDSM or some other job run on that slave.

Now let's go back to the main interest of this thread. If we know that whatever is written to /var/tmp can be considered a cache and can be used by the next run of the job that uses it, it might be a good idea not to clean /var/tmp. Jenkins is helping us with that by trying to run jobs on the same slave as much as possible. We will start by monitoring our disks constantly to see how fast, if at all, they are getting full.

On Wed, May 24, 2017 at 6:28 PM, Michal Skrivanek <mskrivan@redhat.com> wrote:
To get back to the original point - I do not see a connection with imageio anywhere. It's libguestfs's temp dir. Now to decide what to do with it I think we should first understand which test uses/invokes libguestfs and for what purpose?
On 24 May 2017, at 12:35, Gil Shinar <gshinar@redhat.com> wrote:
On Wed, May 24, 2017 at 12:38 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
On Wed, May 24, 2017 at 11:35 AM, Barak Korren <bkorren@redhat.com> wrote:
On 24 May 2017 at 11:17, Yaniv Kaul <ykaul@redhat.com> wrote:
/dev/shm is just as good. It's only 400MB. Y.
Forgive my language but, hell no. This is not the gigantic Lago bare metals you are used to. We don't want GWT builds to start failing on running out of RAM.
Buy more RAM.
This is the best solution as having the cache on the ram will shorten the time of engine jobs.
Y.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On Thu, May 25, 2017 at 10:59, Gil Shinar <gshinar@redhat.com> wrote:
I wrote that it was imageio because I have disabled deletion of /var/tmp on one job only (jenkins check-patch) and saw that on the same Jenkins slave only imageio check-patch and Jenkins check-patch run. Jenkins check-patch has nothing to do with libguestfs so I assumed that imageio did. Here is a list of running jobs on the slave I have checked /var/tmp on. The imageio job cleans /var/tmp and jenkins job doesn't. [image: Inline image 1]
Anyhow, I'll take your word on that and assume that the Jenkins build history has bugs and a VDSM or some other job run on that slave.
Now let's go back to the main interest of this thread. If we know that whatever is written to /var/tmp can be considered a cache and can be used by the next run of the job that uses it, it might be a good idea not to clean /var/tmp. Jenkins is helping us with that by trying to run jobs on the same slave as much as possible.
Vdsm and ovirt-imageio use /var/tmp because we need a file system supporting direct I/O. /tmp is using tmpfs, which does not support it.

We have no need for "cached" data kept after a test run, and we cannot promise that tests will never leave junk in /var/tmp, since tests run before they are reviewed. Even correct tests can leave junk if the test runner is killed (for example, on timeout).

The only way to keep slaves clean is to clean /tmp and /var/tmp after each run. Treating /var/tmp as cache is very wrong.

We will start by monitoring our disks constantly to see how fast, if at all, they are getting full.
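As a side note on the direct I/O point above, a quick way to check whether a directory's filesystem supports it is to attempt an O_DIRECT write there. This is only a sketch; on a typical Fedora/EL setup the first command is expected to fail with "Invalid argument" on a tmpfs-backed /tmp, while the second should succeed on a disk-backed /var/tmp:

dd if=/dev/zero of=/tmp/dio-test bs=4k count=1 oflag=direct      # O_DIRECT write on tmpfs
dd if=/dev/zero of=/var/tmp/dio-test bs=4k count=1 oflag=direct  # O_DIRECT write on disk
rm -f /tmp/dio-test /var/tmp/dio-test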
On Wed, May 24, 2017 at 6:28 PM, Michal Skrivanek <mskrivan@redhat.com> wrote:
To get back to the original point - I do not see a connection with imageio anywhere. It's libguestfs's temp dir. Now to decide what to do with it I think we should first understand which test uses/invokes libguestfs and for what purpose?
On 24 May 2017, at 12:35, Gil Shinar <gshinar@redhat.com> wrote:
On Wed, May 24, 2017 at 12:38 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
On Wed, May 24, 2017 at 11:35 AM, Barak Korren <bkorren@redhat.com> wrote:
On 24 May 2017 at 11:17, Yaniv Kaul <ykaul@redhat.com> wrote:
/dev/shm is just as good. It's only 400MB. Y.
Forgive my language but, hell no. This is not the gigantic Lago bare metals you are used to. We don't want GWT builds to start failing on running out of RAM.
Buy more RAM.
This is the best solution as having the cache on the ram will shorten the time of engine jobs.
Y.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On 25 May 2017 at 11:22, Nir Soffer <nsoffer@redhat.com> wrote:
Vdsm and ovirt-imageio use /var/tmp because we need a file system supporting direct I/O. /tmp is using tmpfs, which does not support it.
We have no need for "cached" data kept after a test run, and we cannot promise that tests will never leave junk in /var/tmp, since tests run before they are reviewed. Even correct tests can leave junk if the test runner is killed (for example, on timeout).
The only way to keep slaves clean is to clean /tmp and /var/tmp after each run. Treating /var/tmp as cache is very wrong.
You need to differentiate between the '/var/tmp' you see from your scripts to the one we are talking about here.

- When you use /var/tmp in your script you use the one inside the mock environment. It is specific to your script's runtime environment and will always be wiped out when it's done.

- We are talking about "/var/tmp" _of_the_execution_slave_, the only way you can get to it is either specifically bind-mount it from the "*.mounts" file, or have some daemon like libvirtd or dockerd write to it.

BTW if you want any guarantees about the FS you are using, you better bind-mount something to the point you are writing to, otherwise things will break when we make infrastructure changes like for example moving the chroots to RAM or onto layered file-systems.

-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On Thu, May 25, 2017 at 12:11 PM Barak Korren <bkorren@redhat.com> wrote:
On 25 May 2017 at 11:22, Nir Soffer <nsoffer@redhat.com> wrote:
Vdsm and ovirt-imageio use /var/tmp because we need a file system supporting direct I/O. /tmp is using tmpfs, which does not support it.
We have no need for "cached" data kept after a test run, and we cannot promise that tests will never leave junk in /var/tmp, since tests run before they are reviewed. Even correct tests can leave junk if the test runner is killed (for example, on timeout).
The only way to keep slaves clean is to clean /tmp and /var/tmp after each run. Treating /var/tmp as cache is very wrong.
You need to differentiate between the '/var/tmp' you see from your scripts to the one we are talking about here.
- When you use /var/tmp in your script you use the one inside the mock environment. It is specific to your script's runtime environment and will always be wiped out when it's done.
Great, this is what we need.
- We are talking about "/var/tmp" _of_the_execution_slave_, the only way you can get to it is either specifically bind-mount it from the "*.mounts" file, or have some daemon like libvirtd or dockerd write to it.
In this case, I don't see how vdsm tests can pollute the host /var/tmp. Vdsm runs 2 tests running virt-alignment-scan, one with --help, and one with non-existing images, so the temporary directory cannot be created by these tests.
BTW if you want any guarantees about the FS you are using, you better bind-mount something to the point you are writing to, otherwise things will break when we make infrastructure changes like for example moving the chroots to RAM or onto layered file-systems.
We need a location which exists on developer laptops, developer hosts, and CI environments, and /var/tmp proved to be a good choice so far. We expect that ovirt CI will not break this assumption in the future.

However, writing test data to storage is a waste of time, and having a memory based file system supporting direct I/O would speed up a lot of tests.

So we can do this:

truncate -s 5g /tmp/backing
mkfs.ext4 /tmp/backing
mount -o loop /tmp/backing /tmp/mnt

And now we have direct I/O support and great performance:

$ dd if=/dev/zero of=/tmp/mnt/direct-io-test bs=8M count=128 oflag=direct
128+0 records in
128+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.536528 s, 2.0 GB/s

This greatly speeds up some tests which are marked as slow tests, and never run unless using --enable-slow-tests.

Without slow tests, using /var/tmp:

$ ./run_tests_local.sh storage_qcow2_test.py -s
...
Ran 31 tests in 0.709s

With slow tests, using a loop device based temporary directory:

$ ./run_tests_local.sh storage_qcow2_test.py --enable-slow-tests -s
...
Ran 31 tests in 7.019s

With slow tests, using /var/tmp:

$ ./run_tests_local.sh storage_qcow2_test.py --enable-slow-tests -s
...
Ran 31 tests in 90.491s

This requires root for mounting and unmounting the backing file, so it is not a good solution for developers who need to run certain tests all the time, but it can be a good solution for the CI.

Barak, do you think ovirt CI can provide this functionality?

Nir
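Two small notes on the recipe above (not from the original mail): the mount step assumes the mount point already exists, and the matching cleanup would be roughly:

mkdir -p /tmp/mnt     # needed before the mount above if the directory is missing
umount /tmp/mnt       # cleanup once the tests are done
rm -f /tmp/backing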

On 25 May 2017 at 15:42, Nir Soffer <nsoffer@redhat.com> wrote:
In this case, I don't see how vdsm tests can pollute the host /var/tmp.
Vdsm runs 2 tests running virt-alignment-scan, one with --help, and one with non-existing images, so the temporary directory cannot be created by these tests.
As you see in $subject, we are looking for something that is invoking libguestfs. Could vdsm or imageio be doing that?
We need a location which exists on developer laptops, developer hosts, and CI environments, and /var/tmp proved to be a good choice so far. We expect that ovirt CI will not break this assumption in the future.
One thing you can do is use something like "/var/tmp/vdsm_test" and also include that string in an "automation/*.mounts" file. This will cause 'mock_runner.sh' to mkdir and bind-mount it into the environment so you are using the host's real '/var/tmp/vdsm_test' which will probably remain a real file-system for the time being. But if you do this, please clean up when you're done...
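Spelled out as the equivalent manual steps, this is roughly what the bind-mount gives you. This is illustrative only; mock_runner.sh performs the mkdir and bind-mount itself based on the mounts file, and the chroot path below is made up:

mkdir -p /var/tmp/vdsm_test
mount --bind /var/tmp/vdsm_test /var/lib/mock/<chroot>/root/var/tmp/vdsm_test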
However, writing test data to storage is a waste of time, and having a memory based file system supporting direct I/O would speed up a lot of tests.
So we can do this:
truncate -s 5g /tmp/backing
mkfs.ext4 /tmp/backing
mount -o loop /tmp/backing /tmp/mnt
And now we have direct I/O support and great performance:
This requires root for mounting and unmounting the backing file, so it is not a good solution for developers who need to run certain tests all the time, but it can be a good solution for the CI.
Have you tried doing this from inside mock? You are root there so it might work. But note that /tmp is not in RAM in EL7. Better use /dev/shm for compatibility.
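Combining the two suggestions, the loop-device recipe from earlier in the thread could be backed by /dev/shm instead of /tmp. A sketch; -F is added so mkfs does not prompt about formatting a regular file, and the mount point name is made up:

truncate -s 5g /dev/shm/backing
mkfs.ext4 -F /dev/shm/backing
mkdir -p /var/tmp/dio-mnt
mount -o loop /dev/shm/backing /var/tmp/dio-mnt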
Barak, do you think ovirt CI can provide this functionality?
I guess we could come up with some special syntax in the '*.mounts' file to support something like this, but I'd rather not add any big features to mock_runner before we rewrite it to use containers.

-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On Thu, May 25, 2017 at 4:00 PM Barak Korren <bkorren@redhat.com> wrote:
On 25 May 2017 at 15:42, Nir Soffer <nsoffer@redhat.com> wrote:
In this case, I don't see how vdsm tests can pollute the host /var/tmp.
Vdsm runs 2 tests running virt-alignment-scan, one with --help, and one with non-existing images, so the temporary directory cannot be created by these tests.
As you see in $subject, we are looking for something that is invoking libguestfs. Could vdsm or imageio be doing that?
ovirt-imageio does not invoke any external program. Vdsm tests invoke virt-alignment-scan, but not in a way that can leave anything around:

LIBGUESTFS_BACKEND=direct virt-alignment-scan --help
LIBGUESTFS_BACKEND=direct virt-alignment-scan --add no-such-file

I don't know about other tests using libguestfs. But how can a program running inside the mock chroot access the host's /var/tmp?

On 25 May 2017, at 15:40, Barak Korren <bkorren@redhat.com> wrote:
But how can a program running inside the mock chroot access the host's /var/tmp?
By talking to libvirtd that runs outside.
Can you please add more details about the file you’ve seen? Was it a single file? When was it created? What does it contain? What kind of other tests are being executed on this host? Are you sure it’s from check-patch?

Thanks,
michal
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On Thu, May 25, 2017 at 10:58 AM, Gil Shinar <gshinar@redhat.com> wrote:
I wrote that it was imageio because I have disabled deletion of /var/tmp on one job only (jenkins check-patch) and saw that on the same Jenkins slave only imageio check-patch and Jenkins check-patch run. Jenkins check-patch has nothing to do with libguestfs so I assumed that imageio did. Here is a list of running jobs on the slave I have checked /var/tmp on. The imageio job cleans /var/tmp and jenkins job doesn't.
I thought it was ovirt-system-tests (Lago specifically) using virt-* tools. Y.
[image: Inline image 1]
Anyhow, I'll take your word on that and assume that the Jenkins build history has bugs and a VDSM or some other job run on that slave.
Now let's go back to the main interest of this thread. If we know that whatever is written to /var/tmp can be considered a cache and can be used by the next run of the job that uses it, it might be a good idea not to clean /var/tmp. Jenkins is helping us with that by trying to run jobs on the same slave as much as possible. We will start by monitoring our disks constantly to see how fast, if at all, they are getting full.
On Wed, May 24, 2017 at 6:28 PM, Michal Skrivanek <mskrivan@redhat.com> wrote:
To get back to the original point - I do not see a connection with imageio anywhere. It's libguestfs's temp dir. Now to decide what to do with it I think we should first understand which test uses/invokes libguestfs and for what purpose?
On 24 May 2017, at 12:35, Gil Shinar <gshinar@redhat.com> wrote:
On Wed, May 24, 2017 at 12:38 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
On Wed, May 24, 2017 at 11:35 AM, Barak Korren <bkorren@redhat.com> wrote:
On 24 May 2017 at 11:17, Yaniv Kaul <ykaul@redhat.com> wrote:
/dev/shm is just as good. It's only 400MB. Y.
Forgive my language but, hell no. This is not the gigantic Lago bare metals you are used to. We don't want GWT builds to start failing on running out of RAM.
Buy more RAM.
This is the best solution as having the cache on the ram will shorten the time of engine jobs.
Y.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

One thing I was missing and Barak found is that our cleanup script does not remove files and folders that start with a dot. It is, obviously, a bug, but it means that the build history screenshot I have pasted here doesn't have anything to do with the .guestfs-0 folder.

As far as I know, system tests jobs are running only on bare metal slaves. The slave I saw this folder on was a VM.

On Thu, May 25, 2017 at 10:56 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
On Thu, May 25, 2017 at 10:58 AM, Gil Shinar <gshinar@redhat.com> wrote:
I wrote that it was imageio because I have disabled deletion of /var/tmp on one job only (jenkins check-patch) and saw that on the same Jenkins slave only imageio check-patch and Jenkins check-patch run. Jenkins check-patch has nothing to do with libguestfs so I assumed that imageio did. Here is a list of running jobs on the slave I have checked /var/tmp on. The imageio job cleans /var/tmp and jenkins job doesn't.
I thought it was ovirt-system-tests (Lago specifically) using virt-* tools. Y.
[image: Inline image 1]
Anyhow, I'll take your word on that and assume that the Jenkins build history has bugs and a VDSM or some other job run on that slave.
Now let's go back to the main interest of this thread. If we know that whatever is written to /var/tmp can be considered a cache and can be used by the next run of the job that uses it, it might be a good idea not to clean /var/tmp. Jenkins is helping us with that by trying to run jobs on the same slave as much as possible. We will start by monitoring our disks constantly to see how fast, if at all, they are getting full.
On Wed, May 24, 2017 at 6:28 PM, Michal Skrivanek <mskrivan@redhat.com> wrote:
To get back to the original point - I do not see a connection with imageio anywhere. It's libguestfs's temp dir. Now to decide what to do with it I think we should first understand which test uses/invokes libguestfs and for what purpose?
On 24 May 2017, at 12:35, Gil Shinar <gshinar@redhat.com> wrote:
On Wed, May 24, 2017 at 12:38 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
On Wed, May 24, 2017 at 11:35 AM, Barak Korren <bkorren@redhat.com> wrote:
On 24 May 2017 at 11:17, Yaniv Kaul <ykaul@redhat.com> wrote:
/dev/shm is just as good. It's only 400MB. Y.
Forgive my language but, hell no. This is not the gigantic Lago bare metals you are used to. We don't want GWT builds to start failing on running out of RAM.
Buy more RAM.
This is the best solution as having the cache on the ram will shorten the time of engine jobs.
Y.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
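For reference, regarding the dot-files gap Gil describes above: a cleanup that also catches dot-files and dot-directories could look like the line below. This is a sketch, not the actual CI cleanup script; a plain "rm -rf /var/tmp/*" misses names starting with a dot because of shell globbing:

find /var/tmp -mindepth 1 -maxdepth 1 -exec rm -rf {} +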

On 24 May 2017 at 10:30, Nir Soffer <nsoffer@redhat.com> wrote:
Please keep this behavior.
<snip>
Use /var/cache?
Can't. In order to keep allowing local use of "mock_runner.sh", the directory needs to:

1. Exist on every system by default
2. Be accessible from a user level (not root)

"/var/cache" is root owned. Stuff you see there gets 'mkdir'ed and 'chown'ed by RPM post-install scripts.

-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On Wed, May 24, 2017 at 9:45 AM, Barak Korren <bkorren@redhat.com> wrote:
On 23 May 2017 at 18:34, Yaniv Kaul <ykaul@redhat.com> wrote:
It's in /var/tmp and it's supposed to be cached. Is there an issue with this? It's a 400MB image, AFAIR.
We currently have /var/tmp wiped out after each and every job run. We are looking into stopping that to allow it to be used for persistent caches, but we don't want the slaves to fill up as a result.
We need to understand how fast this may accumulate.
It's the exact same image, I don't see it adding up. Y.
400MB accumulation per run is a lot. Unless this is stable and also gets recycled automatically.
-- Barak Korren RHV DevOps team , RHCE, RHCi Red Hat EMEA redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
participants (6)
- Barak Korren
- Gil Shinar
- Michal Skrivanek
- Nadav Goldin
- Nir Soffer
- Yaniv Kaul