Hello Yaniv.

I don't use mock. And if I run everything in RAM (whether directly under /dev/shm/<somewhere> or on a zram disk), I honestly don't need the Linux system cache.
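For what it's worth, a minimal sketch of what I mean by running directly in RAM (the scratch path here is just a placeholder):

```shell
# /dev/shm is a tmpfs mount (RAM-backed) present on most Linux systems,
# so using it as a scratch dir keeps the whole working tree in RAM
# without waiting for the page cache to warm up.
SCRATCH="/dev/shm/ci-scratch-$$"
mkdir -p "$SCRATCH"

# Confirm the directory really is RAM-backed: tmpfs means the data lives in RAM.
stat -f -c %T "$SCRATCH"

# Files written here never touch the disk at all.
echo "build artifact" > "$SCRATCH/artifact.txt"
cat "$SCRATCH/artifact.txt"

rm -rf "$SCRATCH"
```

The same idea works with a zram device, it just needs root to set up.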

Besides the possible difference in disk speeds, I think the second factor is the Linux fs cache, which basically creates an analog of a RAM disk on the fly.

Well, theoretically, if you have enough RAM and you keep re-running, much of the data is indeed going to be cached. I'd argue that it's a better use of RAM to just run everything there directly.
 

OK, then I guess we have tracked down the reason. I have not run any tests to confirm this yet, but I believe mock is causing this effect. On our side we use mock, which at the moment is an essential part of the standard CI offering. Mock keeps filesystem caches of the chroots, and those caches are shared between runs. Together with the Linux fs cache, this means they have probably already been identified as good candidates for the in-RAM page cache and kept there.

So since mock has its own cache, it should be trivial for Linux to identify it and keep it in RAM, and this benefits all runs that use mock. If we remove mock, the picture is completely different.
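For context, the chroot caching I mean is mock's root_cache plugin. A sketch of the relevant config (option names as in mock's plugin configuration; the exact values here are illustrative, not our production settings):

```python
# Fragment of a mock site config (e.g. /etc/mock/site-defaults.cfg).
# root_cache keeps a tarball of the prepared chroot and reuses it across
# runs, so repeated builds mostly re-read the same files -- exactly the
# access pattern the Linux page cache rewards.
config_opts['plugin_conf']['root_cache_enable'] = True
config_opts['plugin_conf']['root_cache_opts']['max_age_days'] = 15  # illustrative value
```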

Anton.

--
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat