[ https://ovirt-jira.atlassian.net/browse/OVIRT-2591?page=com.atlassian.jir... ]
Eyal Edri commented on OVIRT-2591:
----------------------------------
A few questions:
# Does this also apply to podman and buildah, assuming we move to them in a few
months?
# Can you estimate or measure how much time this setup would shave off the test runs?
It would have to be something significant to justify this kind of improvement.
Also, we probably want to put any major infrastructure improvement (one that requires
a significant amount of work/code) on hold until we know what the requirements are for
running it inside the CentOS CI infra.
cc [~gbenhaim@redhat.com] [~dbelenky@redhat.com] [~bkorren@redhat.com]
Add a distributed docker-cache
------------------------------
Key: OVIRT-2591
URL:
https://ovirt-jira.atlassian.net/browse/OVIRT-2591
Project: oVirt - virtualization made easy
Issue Type: Improvement
Reporter: Roman Mohr
Assignee: infra
Priority: High
What?
If CI builds get heavy and things are running inside containers, I expect
the CI system to proactively optimize where it can. Since the CI
system provides the docker installation, I would expect that, under some
conditions, it automatically puts heavy docker builds into a distributed
cache in the cluster. Examples of how this can be achieved are listed in [1]
and [2].
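To illustrate one possible mechanism (not necessarily the exact approach taken in
[1] or [2]): a shared registry can act as the distributed cache, with each build
pulling a previously pushed image and passing it to `docker build --cache-from`,
then pushing the result so other hosts can reuse its layers. This is only a
minimal sketch; the registry name and image naming scheme below are hypothetical.

{code:python}
import subprocess

REGISTRY = "registry.ovirt.example"  # hypothetical shared cache registry


def build_with_shared_cache(project: str, context_dir: str = ".") -> str:
    """Build `context_dir` while reusing layers cached in the shared registry."""
    cache_image = f"{REGISTRY}/{project}:cache"

    # Best effort: a missing cache image must not fail the build.
    subprocess.run(["docker", "pull", cache_image], check=False)

    # Reuse layers from the pulled image wherever the Dockerfile steps match.
    subprocess.run(
        ["docker", "build", "--cache-from", cache_image,
         "-t", cache_image, context_dir],
        check=True,
    )

    # Publish the fresh layers so other hosts' builds can start warm.
    subprocess.run(["docker", "push", cache_image], check=True)
    return cache_image
{code}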
Why?
Dockerfiles have the advantage that we can isolate our build steps in a
Dockerfile. This gives reproducibility, but it also means that e.g. curl
downloads or RPM installs are not visible to the CI system. It is therefore
beneficial for both the CI system and the user (more speed and less
utilization) to put docker images together with their build chain into a
distributed cache and to pre-fetch that cache into the local docker cache of
the build slot. Pre-fetching based on e.g. the GitHub project probably makes sense.
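A rough sketch of what that pre-fetch step could look like, assuming a
per-project mapping to cache images; the registry, the tag convention and the
example project names are all assumptions, not an existing oVirt CI convention.

{code:python}
import subprocess

REGISTRY = "registry.ovirt.example"  # hypothetical shared cache registry

# Hypothetical mapping of GitHub projects to the cache images they benefit from.
PROJECT_CACHE_IMAGES = {
    "kubevirt/kubevirt": ["kubevirt/builder:cache"],
    "ovirt/ovirt-engine": ["ovirt/engine-build:cache"],
}


def prefetch_cache(github_project: str) -> None:
    """Warm the build slot's local docker cache before the job starts."""
    for image in PROJECT_CACHE_IMAGES.get(github_project, []):
        # Failures are tolerated; the build simply runs without a warm cache.
        subprocess.run(["docker", "pull", f"{REGISTRY}/{image}"], check=False)


if __name__ == "__main__":
    prefetch_cache("kubevirt/kubevirt")
{code}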
[1]
https://runnable.com/blog/distributing-docker-cache-across-hosts
[2]
https://blog.codeship.com/building-a-remote-caching-system/
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100096)