On 24 November 2017 at 11:05, Viktor Mihajlovski
<mihajlov@linux.vnet.ibm.com> wrote:
On 21.11.2017 11:26, Dan Horák wrote:
[...]
>> qemu s390x emulation does not work with code compiled for z12.
>> Would a real virtual machine be what you need?
>> The Fedora team DOES have access to a z13. Not sure how many
>> resources are available, but you can contact Dan Horak (on cc) to
>> check whether there is enough spare capacity.
>
> Christian is right, we have a publicly accessible guest running Fedora
> on the Marist College z13 mainframe. It's currently used by ~5 projects
> (for example glibc and qemu) as their build and CI host, so adding
> another project depends on how intensive oVirt's usage would be.
As a first step one could build only the packages needed for the KVM
host. At this point in time that would be vdsm and ovirt-host, both of
which build rather quickly.
It should be possible to ensure that only these are built on an s390
system by using appropriate node filters.
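Just to illustrate the node-filter idea, here is a minimal sketch of an
architecture gate a build wrapper could apply on top of Jenkins-side
node labels. The whitelist and function names are made up; this is not
the actual CI configuration:

    # Illustrative sketch only -- the whitelist and function names are
    # hypothetical, not part of any existing CI script.
    import platform

    S390X_WHITELIST = {'vdsm', 'ovirt-host'}  # hypothetical whitelist

    def allowed_on_this_node(project):
        """Allow everything on common arches, whitelisted projects only on s390x."""
        if platform.machine() == 's390x':
            return project in S390X_WHITELIST
        return True

    if __name__ == '__main__':
        for p in ('vdsm', 'ovirt-host', 'ovirt-engine'):
            print(p, allowed_on_this_node(p))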
[...]
We can get more accurate data by looking at the ppc64le build history
(we support ppc64le only for hypervisor usage, similar to what is
intended for s390).
Here is the history for vdsm:
http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-ppc64le/buil...
(~20 builds a day taking 1-2 minutes each)
And here is the one for ovirt-host:
http://jenkins.ovirt.org/job/ovirt-host_master_check-patch-el7-ppc64le/bu...
(only 1 build in history, taking 3-4 minutes)
Looking at what else we have building on ppc64le:
http://jenkins.ovirt.org/search/?q=master_build-artifacts-el7-ppc64le
I can also see ioprocess, which is a vdsm dependency, and the SDK,
which is probably not really needed.
So for ioprocess:
http://jenkins.ovirt.org/job/ioprocess_master_build-artifacts-el7-ppc64le...
I'd say it's very rarely built.
So we end up with ~20 one-to-two-minute builds a day, i.e. roughly
20-40 minutes of build time, multiplied by the number of Fedora
versions we want to support (which will probably be just one), with
the rest being a statistical error...
I wonder about sharing a VM with other projects though. We do use mock
for running the build script, so the build itself should be fairly
isolated, but we have some wrapper scripts of our own around mock that
try to keep build dependencies in the chroot cache over time. We're
also incompatible with mock's new systemd-nspawn backend, so we force
it to use the older chroot-based backend. If other projects are using
mock as well, I wonder if we may end up with race conditions arising
from shared use of /var/lib/mock.
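For reference, a rough sketch of the mock settings this touches. The
option names are from mock's configuration format, but the values and
paths here are only illustrative, not what our wrappers actually set:

    # Illustrative mock configuration snippet -- values are hypothetical.
    # Force the older chroot-based backend instead of systemd-nspawn
    # (newer mock versions spell this config_opts['isolation'] = 'simple').
    config_opts['use_nspawn'] = False

    # Keep dependency/chroot caches between runs, which is what our
    # wrapper scripts try to preserve.
    config_opts['plugin_conf']['root_cache_enable'] = True
    config_opts['plugin_conf']['yum_cache_enable'] = True

    # One way to avoid stepping on other projects under /var/lib/mock:
    # give each project its own base directory (path is made up), or
    # alternatively pass --uniqueext=<project> per invocation.
    config_opts['basedir'] = '/var/lib/mock-ovirt'

Whether something like that is enough would of course depend on how
the other projects invoke mock on the same guest.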
Bottom line - we may end up being slightly noisy neighbours if we
share a VM, but we can try that and see what happens. How do we move
forward with trying that?
--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. |
redhat.com/trusted