On 24 November 2017 at 14:36, Dan Horák <dan(a)danny.cz> wrote:
On Fri, 24 Nov 2017 13:16:53 +0200
Barak Korren <bkorren(a)redhat.com> wrote:
> On 24 November 2017 at 11:05, Viktor Mihajlovski
> <mihajlov(a)linux.vnet.ibm.com> wrote:
> > On 21.11.2017 11:26, Dan Horák wrote:
> > [...]
> >>> qemu s390x emulation does not work with code compiled for z12.
> >>> Would a real virtual machine be what you need?
> >>> The Fedora team DOES have access to a z13. Not sure how much
> >>> resources are available, but can you contact Dan Horak (on cc) if
> >>> there is enough spare capacity.
> >>
> >> Christian is right, we have a publicly accessible guest running
> >> Fedora on the Marist College z13 mainframe. It's currently used by
> >> ~5 projects (for example glibc and qemu) as their build and CI
> >> host, so adding another project depends on how intensive oVirt's
> >> usage would be.
> > As a first step, one could build only the packages needed for the
> > KVM host. At this point in time that would be vdsm and ovirt-host,
> > both of which build rather quickly.
> > It should be possible to ensure that only these are built on an
> > s390 system using appropriate node filters.
> > [...]
>
> We can get more accurate data by looking at the ppc64le build history
> (we support ppc64le only for hypervisor usage, similar to what is
> intended for s390).
> Here is the history for vdsm:
>
http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-ppc64le/buil...
> (~20 builds a day taking 1-2 minutes each)
> And here is the one for ovirt-host:
>
http://jenkins.ovirt.org/job/ovirt-host_master_check-patch-el7-ppc64le/bu...
> (only 1 build in history, taking 3-4 minutes)
>
> Looking at what else we have building on ppc64le:
>
http://jenkins.ovirt.org/search/?q=master_build-artifacts-el7-ppc64le
> I can also see ioprocess, which is a vdsm dependency, and the SDK,
> which is probably not really needed.
> So for ioprocess:
>
http://jenkins.ovirt.org/job/ioprocess_master_build-artifacts-el7-ppc64le...
> I'd say it's very rarely built.
>
> So we end up with ~20 1-2 minute builds a day (times the number of
> Fedora versions we want to support, which will probably be just one),
> with the rest being a statistical error...
>
> I wonder about sharing a VM with other projects though. We do use
> mock for running the build script, so the build itself should be
> fairly isolated, but we have some wrapper scripts of our own around
> mock that try to keep build dependencies in the chroot cache over
> time. We're also incompatible with mock's new systemd-nspawn backend,
> so we force it to use the older chroot-based backend. If other
> projects are using mock as well, I wonder if we may end up with race
> conditions arising from shared use of /var/lib/mock.
it should work fine
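One way to keep concurrent mock users out of each other's way is to give each project a distinct chroot name and pin the older backend. A hypothetical fragment for /etc/mock/site-defaults.cfg (the 'root' value here is purely illustrative; 'use_nspawn' is the mock 1.4-era option for falling back to the chroot-based backend):

```
# Force the pre-systemd-nspawn, chroot-based backend.
config_opts['use_nspawn'] = False
# Give this project its own chroot under /var/lib/mock, so
# concurrent builds from other projects don't share state.
config_opts['root'] = 'fedora-26-s390x-ovirt'
```

mock also accepts --uniqueext on the command line, which suffixes the chroot directory per invocation for a similar effect.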
> Bottom line - we may end up being slightly noisy neighbours if we
> share a VM, but we can try that and see what happens. How do we move
> forward with trying that?
ok, I'm pretty sure we can make it work :-) Please send me your public
SSH key and preferred username, then I'll set up an account for you
and we can work on the remaining details.
An update for everyone who may have been watching this thread - we made it work.
With Dan's kind help we've attached an s390x VM to oVirt's CI
infrastructure. I then made some code changes to make our CI code play
nice on it (so far we had simply assumed we owned the execution slaves
and could do whatever we wanted on them). Following that, I added the
basic configuration needed to make the oVirt CI system support s390x
jobs.
For now we only support using Fedora 26 on s390x. Please let me know
if other distributions are desired.
The code changes I've made have already been tested and are now
pending code review:
https://gerrit.ovirt.org/c/85219
https://gerrit.ovirt.org/c/85221
Once those patches are merged, it will become possible to add s390x
jobs to any oVirt project by adding 's390x' to the list of
architectures targeted by the project in the JJB YAML, and by setting
the 'node_filter' for that architecture to 's390x'.
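For illustration, a hypothetical JJB YAML fragment following that pattern (the project name and key layout are illustrative; the exact schema comes from the oVirt standard-CI templates):

```yaml
# Illustrative sketch only - key names depend on the oVirt JJB templates.
- project:
    name: vdsm
    distro:
      - fc26
    arch:
      - x86_64
      - ppc64le
      - s390x:
          node_filter: s390x
```

The node_filter entry is what steers the s390x jobs onto the s390x-labelled Jenkins slave instead of the general executor pool.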
--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. |
redhat.com/trusted