On Thu, May 25, 2017 at 12:11 PM Barak Korren <bkorren@redhat.com> wrote:
On 25 May 2017 at 11:22, Nir Soffer <nsoffer@redhat.com> wrote:
Vdsm and ovirt-imageio use /var/tmp because we need a file system supporting direct I/O. /tmp uses tmpfs, which does not support it.
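To illustrate, trying direct I/O on a tmpfs /tmp fails at open time
with EINVAL, something like:

$ dd if=/dev/zero of=/tmp/direct-io-test bs=8M count=1 oflag=direct
dd: failed to open '/tmp/direct-io-test': Invalid argument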

We have no need for "cached" data kept after a test run, and we cannot promise that tests will never leave junk in /var/tmp, since tests run before they are reviewed. Even correct tests can leave junk if the test runner is killed (for example, on timeout).

The only way to keep slaves clean is to clean /tmp and /var/tmp after each run. Treating /var/tmp as a cache is very wrong.
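As a sketch, a post-run cleanup step on the slave could be as simple
as this (the paths and age threshold are only an illustration):

$ sudo find /tmp /var/tmp -mindepth 1 -mmin +60 -delete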

You need to differentiate between the '/var/tmp' you see from your
scripts and the one we are talking about here.

- When you use /var/tmp in your script, you use the one inside the mock
  environment. It is specific to your script's run-time environment and
  will always be wiped out when it's done.

Great, this is what we need.
 

- We are talking about "/var/tmp" _of_the_execution_slave_; the only
  way you can get to it is either to bind-mount it specifically from
  the "*.mounts" file, or to have some daemon like libvirtd or dockerd
  write to it.
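For example, that bind-mount would be a single line in the project's
"*.mounts" file (assuming the src:dst line format; check the oVirt CI
documentation for the exact syntax):

/var/tmp:/var/tmp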

In this case, I don't see how vdsm tests can pollute the host /var/tmp.

Vdsm runs 2 tests using virt-alignment-scan, one with --help and one
with a non-existing image, so the temporary directory cannot be created
by these tests.
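Roughly, these two invocations look like this (the image path is
illustrative):

$ virt-alignment-scan --help
$ virt-alignment-scan -a /no/such/image.img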
 

BTW if you want any guarantees about the FS you are using, you had
better bind-mount something to the path you are writing to; otherwise
things will break when we make infrastructure changes, for example
moving the chroots to RAM or onto layered file systems.

We need a location which exists on developer laptops, developer hosts,
and CI environments, and /var/tmp has proved to be a good choice so far.
We expect that ovirt CI will not break this assumption in the future.
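For example, the tests could keep /var/tmp as the default and let the
CI override it via the environment (VDSM_TEST_TMPDIR is a hypothetical
variable, just to illustrate):

tmpdir="${VDSM_TEST_TMPDIR:-/var/tmp}"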

However, writing test data to storage is a waste of time, and having
a memory-based file system supporting direct I/O would speed up a lot
of tests.

So we can do this:

$ truncate -s 5g /tmp/backing
$ mkfs.ext4 -q -F /tmp/backing   # -F allows formatting a regular file
$ mkdir -p /tmp/mnt
$ sudo mount -o loop /tmp/backing /tmp/mnt

And now we have direct I/O support and great performance:

$ dd if=/dev/zero of=/tmp/mnt/direct-io-test bs=8M count=128 oflag=direct
128+0 records in
128+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.536528 s, 2.0 GB/s

This greatly speeds up some tests which are marked as slow tests and
never run unless --enable-slow-tests is used.

Without slow tests, using /var/tmp:
$ ./run_tests_local.sh storage_qcow2_test.py -s
...
Ran 31 tests in 0.709s

With slow tests, using a loop-device-based temporary directory:
$ ./run_tests_local.sh storage_qcow2_test.py --enable-slow-tests -s
...
Ran 31 tests in 7.019s

With slow tests, using /var/tmp:
$ ./run_tests_local.sh storage_qcow2_test.py --enable-slow-tests -s
...
Ran 31 tests in 90.491s


This requires root for mounting and unmounting the backing file, so it
is not a good solution for developers who need to run these tests all
the time, but it can be a good solution for the CI.
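For reference, the teardown is just as small (using the paths from the
example above):

$ sudo umount /tmp/mnt
$ rm -f /tmp/backing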

Barak, do you think ovirt CI can provide this functionality?

Nir