On 6 Nov 2020, at 11:29, Milan Zamazal <mzamazal(a)redhat.com> wrote:
Marcin Sobczyk <msobczyk(a)redhat.com> writes:
>> On 11/5/20 11:30 AM, Milan Zamazal wrote:
>> Marcin Sobczyk <msobczyk(a)redhat.com> writes:
>>
>>> On 11/4/20 11:29 AM, Yedidyah Bar David wrote:
>>>> Perhaps what you want, some day, is for the individual tests to
>>>> have make-style dependencies? So you'll issue just a single test,
>>>> and OST will only
>>>> run the bare minimum for running it.
>>> Yeah, I had the same idea. It's not easy to implement it though.
>>> 'pytest' has a "tests are independent" design, so we would need to
>>> build something on top of that (or try to invent our own test
>>> framework, which is a very bad idea). But even with a
>>> dependency-resolving solution, there are tests that set something up
>>> just to bring it down in a moment (by design), so we'd probably need
>>> some kind of "provides" and "tears down" markers. Then you have the
>>> fact that some things take a lot of time and we do other stuff in
>>> between, while waiting - dependency resolving could force things to
>>> happen linearly and the run times could skyrocket... It's a complex
>>> subject that requires a serious think-through.
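(Just to illustrate the idea, not actual OST code, and with made-up test names and resources: "provides"/"requires" declarations like these boil down to a topological sort, which Python 3.9's stdlib graphlib already gives us.)

```python
# Hypothetical sketch of "provides"/"requires" test dependency resolution.
# Test names and resource names are illustrative only, not real OST tests.
from graphlib import TopologicalSorter

TESTS = {
    "test_deploy_hosts":   {"provides": {"hosts"},   "requires": set()},
    "test_attach_storage": {"provides": {"storage"}, "requires": {"hosts"}},
    "test_run_vm":         {"provides": {"vm"},      "requires": {"hosts", "storage"}},
    "test_hotplug_disk":   {"provides": set(),       "requires": {"vm", "storage"}},
}

def schedule(tests):
    # Map each resource to the test that provides it...
    providers = {res: name
                 for name, spec in tests.items()
                 for res in spec["provides"]}
    # ...then turn resource requirements into test -> predecessor edges
    # and let graphlib produce a dependency-respecting linear order.
    graph = {name: {providers[res] for res in spec["requires"]}
             for name, spec in tests.items()}
    return list(TopologicalSorter(graph).static_order())
```

This handles the "provides" half; "tears down" markers would additionally invalidate a resource, forcing anything that still requires it to be scheduled earlier, which is where it gets hairy.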
>> Actually, I was once thinking about introducing test dependencies in
>> order to run independent tests in parallel and speed up OST runs that
>> way. The idea was that OST just waits on something in many places and
>> could run other tests in the meantime (we do some test interleaving
>> in extreme cases, but it's suboptimal and difficult to maintain).
> Yeah, I think I remember you did that during one of OST's hackathons.
>
>>
>> When arranging some things manually, I could achieve a significant
>> speedup. But the problem is, of course, how to automate the dependency
>> management and handle all the possible situations and corner
>> cases. It would be quite a lot of work, I think.
>>
> Exactly. I can see there's [1], for instance, but of course that will
> only work on py3.
py3 is the least problem.
> The dependency management is something we'd have to implement and maintain
> on our own probably.
Yes, this is the hard part.
> Then of course we'd be introducing a test repeatability
> problem, since the ordering of things might differ between runs,
> which in the current state of OST is something I'd like to avoid.
It should be easy to have a switch between deterministic and
non-deterministic ordering. Then one can use the fast, dynamic ordering
for running tests more quickly and the suboptimal but deterministic
ordering can be used for repeatability (on CI etc.). So this is not a
real problem.
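To make that concrete (a rough sketch with made-up test names, not OST code): given the same dependency graph, the runner can either break ties alphabetically for a fixed, repeatable linear order, or hand out whole "ready" batches for a parallel runner to consume in any order.

```python
# Hypothetical sketch: one scheduler, two orderings.
# graph maps test -> set of predecessor tests (graphlib is Python 3.9+).
from graphlib import TopologicalSorter

def ordered_batches(graph, deterministic=True):
    """Yield batches of tests whose dependencies are already satisfied.

    With deterministic=True, ties inside each batch are broken
    alphabetically, so repeated runs produce the same linear order;
    with deterministic=False, a parallel runner may execute each
    batch's tests concurrently, in any order.
    """
    ts = TopologicalSorter(graph)
    ts.prepare()
    while ts.is_active():
        ready = list(ts.get_ready())
        if deterministic:
            ready.sort()
        ts.done(*ready)
        yield ready

GRAPH = {  # illustrative tests only
    "test_b": set(),
    "test_a": set(),
    "test_c": {"test_a", "test_b"},
}
```

So the switch between CI-grade repeatability and fast dynamic ordering is a single flag over the same graph.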
Why do you think it's going to be significantly faster? I do not see
much room for a speedup, at least not with the current set of
tests.
The actual tests that can run in parallel take circa 10 minutes. There's
the initial install, backup/restore during which you can't run anything,
and storage operations (if you try to parallelize them, they only run
slower, and fail).
You're not going to gain much...