
Hi,

as a heads up: I'd like to contribute some basic support for the s390 architecture to oVirt. For that purpose I'll post some patches to gerrit, mostly for the following repositories: VDSM, ovirt-engine-api-model and ovirt-engine. There might be some collateral patches that need to go into ovirt-host[-deploy], since not all of the prerequisite RPMs are available on s390, but this probably needs a separate discussion.

I've started to hang out on the #ovirt OFTC channel, feel free to ping me if you have questions related to s390. Thanks!

-- Mit freundlichen Grüßen/Kind Regards, Viktor Mihajlovski

On 18 September 2017 at 10:30, Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
[...]

I wonder, do you have some way to contribute compute resources so we could run oVirt CI and build jobs on s390x? We could add s390x jobs as long as we can get a version of mock that runs on it.

-- Barak Korren, RHV DevOps team, RHCE, RHCi, Red Hat EMEA | redhat.com | TRIED. TESTED. TRUSTED.

On 18.09.2017 10:36, Barak Korren wrote:
[...]
Unfortunately not. To get started, however, there is always the possibility of getting a LinuxONE Community Cloud instance at https://developer.ibm.com/linuxone . I'm not sure whether it would be possible for you to get access to the s390x Fedora build system?

On 18.09.2017 09:30, Viktor Mihajlovski wrote:
[...]
Short update: with yesterday's API model 4.2.25 release, there's basic support for s390 available in ovirt-engine. At this point in time there are no oVirt yum repositories for the s390x architecture. I'm not sure what the process would be to add s390x repositories and how to build the binary RPMs, at least for the host packages (i.e. vdsm-*). Maybe it would be possible to use the s390-koji infrastructure used to build Fedora for s390x?

On 16 November 2017 at 18:52, Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
[...]
Koji is very opinionated about how RPMs and specfiles should look, AFAIK. A pretty massive amount of work would be needed to make everything a node needs build on it, and then we would end up with a process that is quite different from how we currently do builds for other platforms. (I might be wrong about this, since some oVirt packages also get built as part of the CentOS Virt SIG, and that is done using Koji as well.)

More specifically, Koji usually assumes the starting point for the build process is a specfile, while in oVirt we typically generate the specfile and then the RPM as part of a bigger build process.

Does Fedora have an s390x server associated with it? We use the same basic environment setup tool - mock - as the basis of our build infrastructure, so if Fedora is actually emulating s390x in some way while using mock, we might be able to do the same thing.

Just for general knowledge, the process for building oVirt repos is to have *-build-artifacts-* jobs for each project that build RPMs after patches get merged, then have the change queue collect the built packages, run them through ovirt-system-tests (a.k.a. OST) and finally deposit them into the 'tested' repo, from which they are copied nightly to the '*-snapshot' repos. OST only tests CentOS 7/x86_64 ATM, but we bring along packages for other distros and architectures via the same process, assuming that if a package for a given commit works on CentOS 7/x86_64 it will probably work on other platforms as well...

Koji is very opinionated about how RPMs and specfiles should look, AFAIK
Actually, koji does not care much. Fedora packaging rules are enforced by package reviewers.
More specifically, Koji usually assumes the starting point for the build process would be a specfile
This is correct though.
Does fedora have an s390x server associated to it?
They used to have s390 emulators running for that purpose iirc. Best regards Martin Sivak On Thu, Nov 16, 2017 at 6:27 PM, Barak Korren <bkorren@redhat.com> wrote:
[...]

On 16 November 2017 at 19:38, Martin Sivak <msivak@redhat.com> wrote:
[...]
Does fedora have an s390x server associated to it?
They used to have s390 emulators running for that purpose iirc.
I wonder if someone could provide more information about this. Is this done via qemu? Or built in to mock perhaps? If this exists, I can pave the way to enabling s390x build support on oVirt infra.

On 18 November 2017 at 13:14, Barak Korren <bkorren@redhat.com> wrote:
[...]
There seems to be a 'qemu-system-s390x.x86_64' package available for CentOS 7 in EPEL, so we might be able to use that to get some emulated s390x Jenkins slaves up. As none of the oVirt infra team members is currently experienced with using this, we would appreciate some help with:

* Figuring out the right libvirt commands to get an s390x VM running on an x86_64 host.
* Figuring out the right way to get an OS image for that VM (probably using virt-install to run the Fedora installer).
* Making sure some JVM is available in those VMs.

Once we have that, the rest is quite straightforward. I've created a Jira ticket to map out, discuss and track the required steps: https://ovirt-jira.atlassian.net/browse/OVIRT-1772

WRT the OS we would use, I can see there are s390x Fedora builds available but no CentOS builds. Is there ongoing work to remedy this? We'd rather not have to upgrade our Jenkins slaves every 6 months...

As a side note - at this time it seems we would have to manually spin up the s390x VMs with libvirt. It would be awesome if oVirt could get support for foreign-architecture emulation at some point.
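For the first two bullets, something along these lines might work (a hypothetical sketch, not tested; the guest name, disk path and installer URL are illustrative assumptions, and s390x installs are usually driven over the SCLP serial console):

```shell
# Hypothetical sketch: install an emulated s390x Fedora guest on an x86_64
# host via libvirt. Assumes qemu-system-s390x (from EPEL) and virt-install
# are installed; the name, disk path and URL below are illustrative.
sudo virt-install \
    --name s390x-builder \
    --arch s390x \
    --machine s390-ccw-virtio \
    --vcpus 2 --memory 4096 \
    --disk path=/var/lib/libvirt/images/s390x-builder.qcow2,size=40 \
    --location https://dl.fedoraproject.org/pub/fedora-secondary/releases/26/Server/s390x/os/ \
    --graphics none \
    --console pty,target_type=sclp
```

Once installed, `virsh console s390x-builder` should give the text console, and a JVM for the Jenkins agent could be pulled in with the distro's java-1.8.0-openjdk package.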

On 11/19/2017 09:53 AM, Barak Korren wrote:
[...]
qemu s390x emulation does not work with code compiled for z12. Would a real virtual machine be what you need? The Fedora team DOES have access to a z13. I'm not sure how much capacity is available, but you can contact Dan Horak (on cc) to check whether there is enough to spare.

On Mon, 20 Nov 2017 09:16:48 +0100 Christian Borntraeger <borntraeger@de.ibm.com> wrote:
[...]
Christian is right, we have a publicly accessible guest running Fedora on the Marist College z13 mainframe. It's currently used by ~5 projects (for example glibc and qemu) as their build and CI host, so adding another project depends on how intensive oVirt's usage would be. Dan

On 21.11.2017 11:26, Dan Horák wrote: [...]
[...]
[...] so adding another project depends on how intensive oVirt's usage would be.

As a first step one could build only the packages needed for the KVM host. At this point in time that would be vdsm and ovirt-host, both of which build rather quickly. It should be possible to ensure that only these are built on an s390x system using appropriate node filters. [...]

On Fri, 24 Nov 2017 10:05:06 +0100 Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
[...]
I would say there are two options for building "add-on" packages that install/run on top of Fedora, and it depends on whether they create dependency chains or are standalone. For standalone packages (i.e. their BuildRequires can be resolved solely from the Fedora repos) one can use scratch builds in koji and grab the results (the rpms) when the build finishes. For packages with dependency chains, a dedicated "builder" machine is required. The mock tool is capable of managing a repo with built rpms, so it shouldn't be difficult to achieve on our public guest. Dan
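The dependency-chain case could look roughly like this with mock's chain-build helper (a sketch, not verified on the guest; the config name, repo path and SRPM names are illustrative):

```shell
# Hypothetical sketch: chain-build vdsm and its ioprocess dependency with
# mockchain, which builds SRPMs in order and makes earlier results available
# to later builds through a local repo. Names and paths are illustrative.
mockchain -r fedora-26-s390x \
    -l /srv/ovirt-localrepo \
    ioprocess-1.0-1.fc26.src.rpm \
    vdsm-4.20.0-1.fc26.src.rpm

# The accumulated results can afterwards be published as a plain yum repo:
createrepo_c /srv/ovirt-localrepo/results
```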


On 24 November 2017 at 11:05, Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
[...]
We can get more accurate data by looking at the ppc64le build history (we support ppc64le only for hypervisor usage, similar to what is intended for s390). Here is the history for vdsm: http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-ppc64le/buildTi... (~20 builds a day, taking 1-2 minutes each). And here is the one for ovirt-host: http://jenkins.ovirt.org/job/ovirt-host_master_check-patch-el7-ppc64le/build... (only 1 build in history, taking 3-4 minutes).

Looking at what else we have building on ppc64le: http://jenkins.ovirt.org/search/?q=master_build-artifacts-el7-ppc64le I can also see ioprocess, which is a vdsm dependency, and the SDK, which is probably not really needed. As for ioprocess: http://jenkins.ovirt.org/job/ioprocess_master_build-artifacts-el7-ppc64le/bu... I'd say it's very rarely built.

So we end up with ~20 1-2 minute builds a day (times the number of Fedora versions we want to support, which will probably be just one), with the rest being a statistical error...

I wonder about sharing a VM with other projects though. We do use mock for running the build script, so the build itself should be fairly isolated, but we have some of our own wrapper scripts around mock that try to keep build dependencies in the chroot cache over time. We're also incompatible with mock's new systemd-nspawn backend, so we force it to work with the older chroot-based backend. If other projects are using mock as well, I wonder if we may end up with race conditions arising from shared use of /var/lib/mock.

Bottom line - we may end up being slightly noisy neighbours if we share a VM, but we can try that and see what happens. How do we move forward with trying that?
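Forcing the older backend is a one-line setting in the mock configuration (a sketch; mock config files are Python syntax, and the exact option name may differ between mock versions):

```python
# /etc/mock/site-defaults.cfg -- hypothetical snippet.
# Disable the systemd-nspawn backend and fall back to the legacy
# chroot-based isolation, which the oVirt wrapper scripts expect.
config_opts['use_nspawn'] = False
```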

On Fri, 24 Nov 2017 13:16:53 +0200 Barak Korren <bkorren@redhat.com> wrote:
[...]
If other projects are using mock as well, I wonder if we may end up with race conditions arising from shared use of /var/lib/mock.
it should work fine
Bottom line - we may end up being slightly noisy neighbours if we share a VM, but we can try that and see what happens. How do we move forward with trying that?
ok, I'm pretty sure we can make it work :-) Please send me your public SSH key and preferred username, then I'll set up an account for you and we can work on the remaining details. Dan

On 24 November 2017 at 14:36, Dan Horák <dan@danny.cz> wrote:
ok, I'm pretty sure we can make it work :-) Please send me your public SSH key and preferred username, then I'll set up an account for you and we can work on the remaining details.
I would actually like two keys to be configured. One for our staging system:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCcaW1eHhuFfPKgDcSir/D2/qZlBwMSSXUXZi4F8SOt0C6WFggRcZB6kjk73GyzkZ879Wlr97WITAPXaaEFSNnaa0TSfTpuElqhOipR/IEM9KYDDfYIIoABBebhn6kpBBQ81gd3L4+8Lv6xse6YBu4/HhfILBbUN20DVUYd9vGUc2y0c9RasJjotdg7+1iUzbT/dqPG1OX/S4M/qIF6wcygnedHt2KtPE+QosCACNdtshGHwngO9H+wXv9e/37WFwU6dRMESCxrBAM+gxO8+0nLANW28GDr5EGYNs4gy5reyTKS8qqswHqK4h5bi7Ad1BSx29DUa8wEOUO/TV2eiUsz jenkins@jenkins-staging.phx.ovirt.org

And another for the production system:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDsZ34L+B3YzL7a6zCrJB41r/IqM/s1ILXyjslApSrtquQRtUcbeoE7kS4PdyhO2U4Pu91EzYMPWc7JVnQirwKX5ksXwxZn/Y8f5KzKm5IfPRJfX6sBWS9eGRsyLj5JQjHiVYiBSsACidIr8zc3lJo/nxhp18wj5Ao4h5rhqpw/P+u53/NQ0KvRQtrBFxgWR9JM6KpcjB6rVzm1OBJQPe9aSm97NLh3ijXxYNrbIpXt/YoyByP36QVlcM+L9idFAWY2TkCX5mWclCJJeCint9+SxD0gRW3/tgNWwxx7nkFDGdl/WKhgT0JjmCVFqSG/cGNYYMX+A25zKqqD1SqPNFhx jenkins@jenkins.phx.ovirt.org

The user names for the systems are 'jenkins-staging' and 'jenkins' respectively, though it would also be fine to use a different user name, or the same user for both. Both users need to belong to the 'mock' group.

There are some places in the code where it just assumes it has password-less sudo configured and tries to install some needed dependencies on its own. Those dependencies are most probably not needed for the builds we're discussing here (they include libvirt and docker, which are used by build processes that escape mock to generate container and VM images...), but I'll need to add some checking in my code and skip those settings if sudo permissions are not available. This is why I'd rather connect the staging system first.
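On the builder side, the account setup might look roughly like this (a hypothetical sketch, run as root; only the user name and the 'mock' group membership come from the request above, the key file path is illustrative):

```shell
# Hypothetical sketch: create the staging Jenkins user, add it to the
# 'mock' group, and install the public key posted above.
useradd -m -G mock jenkins-staging
install -d -m 700 -o jenkins-staging -g jenkins-staging \
    /home/jenkins-staging/.ssh
# staging-key.pub holds the jenkins-staging.phx.ovirt.org key from above
install -m 600 -o jenkins-staging -g jenkins-staging \
    staging-key.pub /home/jenkins-staging/.ssh/authorized_keys
```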

On 24 November 2017 at 14:36, Dan Horák <dan@danny.cz> wrote:
[...]
An update for everyone who may have been watching this thread - we made it work. With Dan's kind help we've attached an s390x VM to oVirt's CI infrastructure. I've then made some code changes to make our CI code play nice on it (so far we had just assumed we own the execution slaves and can do what we want on them...). Following that, I've added the basic configuration needed to make the oVirt CI system support s390x jobs.

For now we only support using Fedora 26 on s390x. Please let me know if other distributions are desired.

The code changes I've made have already been tested and are now pending code review:
https://gerrit.ovirt.org/c/85219
https://gerrit.ovirt.org/c/85221

Once those patches are merged it will become possible to add s390x jobs to any oVirt project by adding 's390x' to the list of architectures targeted by the project in the JJB YAML, as well as setting the 'node_filter' to 's390x' for that architecture.
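For illustration, a project definition might then end up looking something like this in the JJB YAML (a hypothetical sketch; the actual schema of oVirt's standard-CI YAML may differ):

```yaml
# Hypothetical JJB YAML fragment adding an s390x variant to a project.
# Key names are illustrative; consult the oVirt standard-CI documentation
# for the exact schema.
- project:
    name: vdsm_standard
    project: vdsm
    distro: fc26
    arch:
      - x86_64
      - s390x:
          node_filter: s390x
```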

On Sun, Dec 10, 2017 at 4:10 AM, Barak Korren <bkorren@redhat.com> wrote:
On 24 November 2017 at 14:36, Dan Horák <dan@danny.cz> wrote:
On Fri, 24 Nov 2017 13:16:53 +0200 Barak Korren <bkorren@redhat.com> wrote:
On 24 November 2017 at 11:05, Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
On 21.11.2017 11:26, Dan Horák wrote: [...]
qemu s390x emulation does not work with code compiled for z12. Would a real virtual machine be what you need? The Fedora team DOES have access to a z13. Not sure how much resource is available, but you can contact Dan Horak (on cc) to check whether there is enough spare capacity.
Christian is right, we have a publicly accessible guest running Fedora on the Marist College z13 mainframe. It's currently used by ~5 projects (for example glibc and qemu) as their build and CI host, so adding another project depends on how intensive oVirt's usage would be. As a first step one could only build the packages needed for the KVM host. At this point in time that would be vdsm and ovirt-host, both of which build rather quickly. It should be possible to ensure that only these are built on a s390 system using appropriate node filters. [...]
We can get more accurate data by looking at the ppc64le build history (We support ppc64le only for hypervisor usage, similar to what is intended for s390). Here is the history for vdsm: http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-ppc64le/buildTimeTrend (~20 builds a day taking 1-2 minutes each) And here is the one for ovirt-host: http://jenkins.ovirt.org/job/ovirt-host_master_check-patch-el7-ppc64le/buildTimeTrend (only 1 build in history, taking 3-4 minutes)
Looking at what else we have building on ppc64le: http://jenkins.ovirt.org/search/?q=master_build-artifacts-el7-ppc64le I can also see ioprocess, which is a vdsm dependency, and the SDK, which is probably not really needed. So for ioprocess: http://jenkins.ovirt.org/job/ioprocess_master_build-artifacts-el7-ppc64le/buildTimeTrend I'd say it's very rarely built.
So we end up with ~20 1-2 minute builds a day (times the number of Fedora versions we want to support, which will probably be just one), with the rest being a statistical error...
I wonder about sharing a VM with other projects though. We do use mock for running the build script, so the build itself should be fairly isolated, but we have some of our own wrapper scripts around mock that do things like trying to keep build dependencies in the chroot cache over time. We're also incompatible with mock's new systemd-nspawn backend, so we force it to work with the older chroot-based backend. If other projects are using mock as well, I wonder if we may end up with race conditions arising from shared use of /var/lib/mock.
it should work fine
Bottom line - we may end up being slightly noisy neighbours if we share a VM, but we can try that and see what happens. How do we move forward with trying that?
ok, I'm pretty sure we can make it work :-) Please send me your public SSH key and preferred username, then I'll set up an account for you and we can work on the remaining details.
An update for everyone who may have been watching this thread - we made it work.
With Dan's kind help we've attached an s390x VM to oVirt's CI infrastructure. I've then gone ahead and made some code changes to make our CI code play nice on it (So far we just assumed we own the execution slaves and can do what we want on them...). Following that I've gone ahead and added the basic configuration needed to make the oVirt CI system support s390x jobs.
For now we only support using Fedora 26 on s390x. Please let me know if other distributions are desired.
The code changes I've made have already been tested and are now pending code review: https://gerrit.ovirt.org/c/85219 https://gerrit.ovirt.org/c/85221
Once those patches are merged it will become possible to add s390x jobs to any oVirt project by adding 's390x' to the list of architectures targeted by the project in the JJB YAML, as well as setting the 'node_filter' to be 's390x' for that architecture.
Great job all! So I guess the question is: for those of us who maintain oVirt projects, are there any that we know need s390x support added?
-- GREG SHEREMETA SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX Red Hat NA <https://www.redhat.com/> gshereme@redhat.com IRC: gshereme <https://red.ht/sig>

On 10 December 2017 at 13:38, Greg Sheremeta <gshereme@redhat.com> wrote:
Great job all! So I guess the question is: for those of us who maintain oVirt projects, are there any that we know need s390x support added?
Well, Viktor Mihajlovski has been contributing patches to VDSM et al. to enable making an s390x hypervisor node, so those are the obvious candidates. Greg, I guess this is less relevant to your current line of work...

I am getting some error logs when the engine starts looking for s390 icons. Nothing critical, but it adds noise:

2017-12-10 13:51:01,987+02 WARN [org.ovirt.engine.core.bll.IconLoader] (ServerService Thread Pool -- 53) [] java.lang.RuntimeException: Icon for other_s390x was not found in /home/frolland/ovirt-engine/share/ovirt-engine/icons/small
2017-12-10 13:51:01,987+02 WARN [org.ovirt.engine.core.bll.IconLoader] (ServerService Thread Pool -- 53) [] java.lang.RuntimeException: Icon for other_s390x was not found in /home/frolland/ovirt-engine/share/ovirt-engine/icons/large
2017-12-10 13:51:01,987+02 WARN [org.ovirt.engine.core.bll.IconLoader] (ServerService Thread Pool -- 53) [] java.lang.RuntimeException: Icon for other_linux_s390x was not found in /home/frolland/ovirt-engine/share/ovirt-engine/icons/small
2017-12-10 13:51:01,988+02 WARN [org.ovirt.engine.core.bll.IconLoader] (ServerService Thread Pool -- 53) [] java.lang.RuntimeException: Icon for other_linux_s390x was not found in /home/frolland/ovirt-engine/share/ovirt-engine/icons/large
2017-12-10 13:51:01,988+02 WARN [org.ovirt.engine.core.bll.IconLoader] (ServerService Thread Pool -- 53) [] java.lang.RuntimeException: Icon for rhel_7_s390x was not found in /home/frolland/ovirt-engine/share/ovirt-engine/icons/small
2017-12-10 13:51:01,988+02 WARN [org.ovirt.engine.core.bll.IconLoader] (ServerService Thread Pool -- 53) [] java.lang.RuntimeException: Icon for rhel_7_s390x was not found in /home/frolland/ovirt-engine/share/ovirt-engine/icons/large
2017-12-10 13:51:01,988+02 WARN [org.ovirt.engine.core.bll.IconLoader] (ServerService Thread Pool -- 53) [] java.lang.RuntimeException: Icon for sles_12_s390x was not found in /home/frolland/ovirt-engine/share/ovirt-engine/icons/small
2017-12-10 13:51:01,988+02 WARN [org.ovirt.engine.core.bll.IconLoader] (ServerService Thread Pool -- 53) [] java.lang.RuntimeException: Icon for sles_12_s390x was not found in /home/frolland/ovirt-engine/share/ovirt-engine/icons/large
2017-12-10 13:51:01,988+02 WARN [org.ovirt.engine.core.bll.IconLoader] (ServerService Thread Pool -- 53) [] java.lang.RuntimeException: Icon for ubuntu_16_04_s390x was not found in /home/frolland/ovirt-engine/share/ovirt-engine/icons/small
2017-12-10 13:51:01,988+02 WARN [org.ovirt.engine.core.bll.IconLoader] (ServerService Thread Pool -- 53) [] java.lang.RuntimeException: Icon for ubuntu_16_04_s390x was not found in /home/frolland/ovirt-engine/share/ovirt-engine/icons/large

On Sun, Dec 10, 2017 at 1:45 PM, Barak Korren <bkorren@redhat.com> wrote:
On 10 December 2017 at 13:38, Greg Sheremeta <gshereme@redhat.com> wrote:
Great job all! So I guess the question is: for those of us who maintain oVirt projects, are there any that we know need s390x support added?
Well, Viktor Mihajlovski has been contributing patches to VDSM et al. to enable making an s390x hypervisor node. So those are the obvious candidates.
Greg, I guess this is less relevant to your current line of work...

On 10.12.2017 12:58, Fred Rolland wrote:
I am getting some error logs when the engine starts looking for s390 icons. Nothing critical, but it adds noise....
Thanks for the feedback, I didn't add the symlinks for the s390x OS types. Will push a patch soon ... [...] -- Mit freundlichen Grüßen/Kind Regards Viktor Mihajlovski IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Köderitz Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen Registergericht: Amtsgericht Stuttgart, HRB 243294
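As an illustrative sketch of the missing-symlink fix hinted at above: the s390x OS-type names are taken from the warnings in the log, but the icon directory and the generic icon file name here are assumptions, not the actual engine patch:

```shell
# Sketch only: link the s390x OS-type icon names from the warnings to an
# existing generic icon. Paths and icon names are hypothetical.
icons=$(mktemp -d)
touch "$icons/other.png"   # stand-in for an existing generic OS icon
for os in other_s390x other_linux_s390x rhel_7_s390x sles_12_s390x ubuntu_16_04_s390x; do
  ln -s other.png "$icons/$os.png"
done
ls "$icons"
```

In the real engine tree the same links would be created once for the small and once for the large icon directory.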

On 10.12.2017 10:10, Barak Korren wrote: [...]
An update for everyone who may have been watching this thread - we made it work.
What a nice surprise after returning from a few days off. Big thanks to both of you, Barak and Dan.
With Dan's kind help we've attached an s390x VM to oVirt's CI infrastructure. I've then gone ahead and made some code changes to make our CI code play nice on it (So far we just assumed we own the execution slaves and can do what we want on them...). Following that I've gone ahead and added the basic configuration needed to make the oVirt CI system support s390x jobs.
For now we only support using Fedora 26 on s390x. Please let me know if other distributions are desired.
Lately, I've been using Fedora 27 for the s390x porting, mostly because of the newer package levels of the key virtualization components. If possible, Fedora 27 would be nice.
The code changes I've made have already been tested and are now pending code review: https://gerrit.ovirt.org/c/85219 https://gerrit.ovirt.org/c/85221
Once those patches are merged it will become possible to add s390x jobs to any oVirt project by adding 's390x' to the list of architectures targeted by the project in the JJB YAML, as well as setting the 'node_filter' to be 's390x' for that architecture.
Looking forward to seeing this merged. Once the s390x RPMs show up in the repository, I will try to do a clean setup of an s390x cluster.

On Thu, 14 Dec 2017 11:24:27 +0100 Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
On 10.12.2017 10:10, Barak Korren wrote: [...]
An update for everyone who may have been watching this thread - we made it work.
What a nice surprise after returning from a few days off. Big thanks to both of you, Barak and Dan.
With Dan's kind help we've attached an s390x VM to oVirt's CI infrastructure. I've then gone ahead and made some code changes to make our CI code play nice on it (So far we just assumed we own the execution slaves and can do what we want on them...). Following that I've gone ahead and added the basic configuration needed to make the oVirt CI system support s390x jobs.
For now we only support using Fedora 26 on s390x. Please let me know if other distributions are desired.
Lately, I've been using Fedora 27 for the s390x porting, mostly because of the newer package levels of the key virtualization components. If possible, Fedora 27 would be nice.
Marist College has agreed to provide us with more guests, so we shouldn't see capacity issues in the near (and mid-term) future. F-26 is good enough for your F-27 builds, because you use mock, right?
The code changes I've made have already been tested and are now pending code review: https://gerrit.ovirt.org/c/85219 https://gerrit.ovirt.org/c/85221
Once those patches are merged it will become possible to add s390x jobs to any oVirt project by adding 's390x' to the list of architectures targeted by the project in the JJB YAML, as well as setting the 'node_filter' to be 's390x' for that architecture.
Looking forward to seeing this merged. Once the s390x RPMs show up in the repository, I will try to do a clean setup of an s390x cluster.
Dan

On 14.12.2017 11:34, Dan Horák wrote:
On Thu, 14 Dec 2017 11:24:27 +0100 [...]
Lately, I've been using Fedora 27 for the s390x porting, mostly because of the newer package levels of the key virtualization components. If possible, Fedora 27 would be nice.
Marist College has agreed to provide us with more guests, so we shouldn't see capacity issues in the near (and mid-term) future. F-26 is good enough for your F-27 builds, because you use mock, right?
Right, a mock config for f27-s390x should do the trick even on an f26 build host. [...]
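A minimal mock chroot config along those lines might look like the following; the field names follow stock /etc/mock configs, but the values are illustrative, not oVirt's actual file:

```
# fedora-27-s390x.cfg -- sketch only
config_opts['root'] = 'fedora-27-s390x'
config_opts['target_arch'] = 's390x'
config_opts['dist'] = 'fc27'
config_opts['releasever'] = '27'
```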

On 14 December 2017 at 12:34, Dan Horák <dan@danny.cz> wrote:
On Thu, 14 Dec 2017 11:24:27 +0100 Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
On 10.12.2017 10:10, Barak Korren wrote: [...]
An update for everyone who may have been watching this thread - we made it work.
What a nice surprise after returning from a few days off. Big thanks to both of you, Barak and Dan.
With Dan's kind help we've attached an s390x VM to oVirt's CI infrastructure. I've then gone ahead and made some code changes to make our CI code play nice on it (So far we just assumed we own the execution slaves and can do what we want on them...). Following that I've gone ahead and added the basic configuration needed to make the oVirt CI system support s390x jobs.
For now we only support using Fedora 26 on s390x. Please let me know if other distributions are desired.
Lately, I've been using Fedora 27 for the s390x porting, mostly because of the newer package levels of the key virtualization components. If possible, Fedora 27 would be nice.
Marist College has agreed to provide us with more guests, so we shouldn't see capacity issues in the near (and mid-term) future. F-26 is good enough for your F-27 builds, because you use mock, right?
Yeah, when I said I only support fc26, it was because I didn't bother enabling the mock configuration for fc27 on s390x on our system. I'll try to find some time to do that later today.

Dan Horak used to be part of this project. Let's see if he still remembers that. Dan: Can you please tell us how the s390 builds in Fedora are/were done when no hw was available? Martin Sivak On Sat, Nov 18, 2017 at 12:14 PM, Barak Korren <bkorren@redhat.com> wrote:
On 16 November 2017 at 19:38, Martin Sivak <msivak@redhat.com> wrote:
Koji is very opinionated about how RPMs and specfiles should look, AFAIK
Actually, koji does not care much. Fedora packaging rules are enforced by package reviewers.
More specifically, Koji usually assumes the starting point for the build process would be a specfile
This is correct though.
Does fedora have an s390x server associated to it?
They used to have s390 emulators running for that purpose iirc.
I wonder if someone could provide more information about this. Is this done via qemu, or built into mock perhaps? If this exists, I can pave the way to enabling s390x build support on oVirt infra.

On Mon, 20 Nov 2017 09:24:00 +0100 Martin Sivak <msivak@redhat.com> wrote:
Dan Horak used to be part of this project. Let's see if he still remembers that.
Dan: Can you please tell us how the s390 builds in Fedora are/were done when no hw was available?
AFAIK Fedora was always building its packages on real mainframe HW, using a dedicated LPAR on Red Hat's machine. Dan
Martin Sivak
On Sat, Nov 18, 2017 at 12:14 PM, Barak Korren <bkorren@redhat.com> wrote:
On 16 November 2017 at 19:38, Martin Sivak <msivak@redhat.com> wrote:
Koji is very opinionated about how RPMs and specfiles should look, AFAIK
Actually, koji does not care much. Fedora packaging rules are enforced by package reviewers.
More specifically, Koji usually assumes the starting point for the build process would be a specfile
This is correct though.
Does fedora have an s390x server associated to it?
They used to have s390 emulators running for that purpose iirc.
I wonder if someone could provide more information about this. Is this done via qemu, or built into mock perhaps? If this exists, I can pave the way to enabling s390x build support on oVirt infra.

On 16.11.2017 18:27, Barak Korren wrote:
On 16 November 2017 at 18:52, Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
Short update: with yesterday's API model 4.2.25 release, there's basic support for s390 available in ovirt-engine. At this point in time, there are no oVirt yum repositories for the s390x architecture - not sure what the process would be to add s390x repositories and how to build the binary RPMs, at least for the host packages (i.e. vdsm-*). Maybe it would be possible to use the s390-koji infrastructure used to build Fedora for s390x?
Koji is very opinionated about how RPMs and specfiles should look, AFAIK. A pretty massive amount of work would be needed to get everything that is needed for a node to build on it, and then we would end up with a process that is quite different from how we currently do builds for other platforms.
(I might be wrong about this, since some oVirt packages get also built as part of the CentOS virt SIG, and that is done using Koji as well)
More specifically, Koji usually assumes the starting point for the build process would be a specfile, while in oVirt we typically generate the specfile and then the RPM as part of a bigger build process.
Does Fedora have an s390x server associated with it? There's a build system for Fedora on s390x: https://s390.koji.fedoraproject.org/koji We do use the same basic environment setup tool - mock - as the basis of our build infrastructure, so if Fedora is actually emulating s390x in some way while using mock, we might be able to do the same thing.
Just for general knowledge, the process for building oVirt repos is to have *-build-artifacts-* jobs for each project that build RPMs after patches get merged, and then have the change-queue collect the built packages, run them through ovirt-system-tests (a.k.a. OST) and finally deposit them into the 'tested' repo, from which they are copied nightly to the '*-snapshot' repos.
OST only tests for CentOS 7/x86_64 ATM, but we bring along packages for other distros and architectures via the same process while assuming that if a package for a given commit works for CentOS 7/x86_64 it would probably work for other platforms as well...
I've noticed that there's no test automation for non-x86 arches. How are the ppc64le packages then built and stored in the repos? Would it be conceivable to do the following? After a successful build and OST of a package, take the SRPM and submit it to s390-koji to produce the s390x binary RPMs. Then copy the binary RPMs into the respective repository, e.g. *-snapshot/rpm/el7/s390x (and update the repository metadata).
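As a rough illustration of the build-and-promote flow described above; every name here is hypothetical, not the real CI code:

```python
# Sketch of the pipeline: build-artifacts job -> change-queue -> OST gate ->
# 'tested' repo (later copied nightly to '*-snapshot'). Names are illustrative.
def promote(change, build, ost_passes, tested_repo):
    rpms = build(change)          # per-project *-build-artifacts-* job
    if ost_passes(rpms):          # batch run through ovirt-system-tests
        tested_repo.extend(rpms)  # published for the nightly snapshot copy
    return tested_repo

repo = promote(
    "vdsm-4.20.0",
    build=lambda c: [f"{c}.el7.x86_64.rpm", f"{c}.el7.ppc64le.rpm"],
    ost_passes=lambda rpms: True,  # OST itself only runs on CentOS 7/x86_64
    tested_repo=[],
)
```

Note how the non-x86 packages ride along: they are published when the x86_64 gate passes, which matches the "assume it works elsewhere too" approach described in the thread.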

On Fri, Nov 17, 2017 at 8:45 AM, Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
On 16.11.2017 18:27, Barak Korren wrote:
On 16 November 2017 at 18:52, Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
Short update: with yesterday's API model 4.2.25 release, there's basic support for s390 available in ovirt-engine. At this point in time, there are no oVirt yum repositories for the s390x architecture - not sure what the process would be to add s390x repositories and how to build the binary RPMs, at least for the host packages (i.e. vdsm-*). Maybe it would be possible to use the s390-koji infrastructure used to build Fedora for s390x?
Koji is very opinionated about how RPMs and specfiles should look, AFAIK. A pretty massive amount of work would be needed to get everything that is needed for a node to build on it, and then we would end up with a process that is quite different from how we currently do builds for other platforms.
(I might be wrong about this, since some oVirt packages get also built as part of the CentOS virt SIG, and that is done using Koji as well)
More specifically, Koji usually assumes the starting point for the build process would be a specfile, while in oVirt we typically generate the specfile and then the RPM as part of a bigger build process.
Does fedora have an s390x server associated to it? There's a build system for Fedora on s390x: https://s390.koji.fedoraproject.org/koji
That is now deprecated; all the architectures, including s390x, build in the primary Fedora koji infra: https://koji.fedoraproject.org/koji/packageinfo?packageID=12944 You'd just have to make the appropriate changes to the vdsm package and it would build for Fedora on s390x.
We do use the same basic environment setup tool - mock - as the basis of our build infrastructure, so if Fedora is actually emulating s390x in some way while using mock, we might be able to do the same thing.
Just for general knowledge, the process for building oVirt repos is to have *-build-artifacts-* jobs for each project that build RPMs after patches get merged, and then have the change-queue collect the built packages, run them through ovirt-system-tests (a.k.a. OST) and finally deposit them into the 'tested' repo, from which they are copied nightly to the '*-snapshot' repos.
OST only tests for CentOS 7/x86_64 ATM, but we bring along packages for other distros and architectures via the same process while assuming that if a package for a given commit works for CentOS 7/x86_64 it would probably work for other platforms as well...
I've noticed that there's no test automation for non-x86 arches. How are the ppc64le packages then built and stored in the repos?
This is slowly improving; it's about having available infrastructure and people to assist in getting it running and supporting it. For example, I believe there's now ppc64le testing in openQA.
Would it be conceivable to do the following? After a successful build and OST of a package, take the SRPM and submit it to s390-koji to produce the s390x binary RPMs. Then copy the binary RPMs into the respective repository, e.g. *-snapshot/rpm/el7/s390x (and update the repository metadata).
We don't currently have s390x EPEL

On Fri, 17 Nov 2017, Peter Robinson wrote:
We don't currently have s390x EPEL
ummm -- I think Peter is suggesting EPEL executables -- this is a solved problem. The ClefOS s390x distribution has all of RHEL 6, RHEL 7, EPEL 6, EPEL 7, and more on s390x, and has had for years. Neale Ferguson (SNA) has 'banged the drum' on the product at trade shows etc., and had a nice Docker demonstration under s390x. I have covered all of the s390x and z/VM mailing lists for nearly a decade on this topic, and it gets mentions at least monthly. Some may recall SNA principal David Boyes' demonstration (with Debian kit) of spinning up north of 40k instances under the older IFL model. Rich Troth (long-time CMS developer) uses ClefOS, and he was over at my office earlier this month to do some evangelism on how to make installations easier for people coming from 'Distributed'.

The initial builds were done in Hercules emulators [I started doing this probably 8 years ago, and demonstrated it at the Louisville VMWorkShop five years ago], then completely re-built starting mid RHEL 6 (after the IBM EOL on older Z hardware), natively:

# uname -a
Linux lclef01.lf-dev.marist.edu 3.10.0-693.5.2.el7.s390x #1 SMP Fri Oct 27 20:15:11 EDT 2017 s390x s390x s390x GNU/Linux

SNA hosts the primary mirror, but I anticipate getting a higher-bandwidth mirror up at Marist University (near the IBM facility at Poughkeepsie), hopefully yet this year:

[herrold@centos-7 ~]$ lynx -dump \
  "http://mirrors.sinenomine.net/epel?releasever=7&arch=s390x&repo=epel"
https://download.sinenomine.net/epel/epel-7

-- Russ herrold
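For reference, pointing yum at that mirror would be a stanza along these lines; the baseurl comes from the message, while the stanza name and gpg settings are placeholders:

```
# Hypothetical yum repo stanza for the s390x EPEL 7 rebuild mentioned above
[clefos-epel7-s390x]
name=EPEL 7 rebuilt for s390x (ClefOS / Sine Nomine mirror)
baseurl=https://download.sinenomine.net/epel/epel-7
enabled=1
gpgcheck=0
```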

On 17 November 2017 at 10:45, Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
On 16.11.2017 18:27, Barak Korren wrote:
On 16 November 2017 at 18:52, Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> wrote:
Short update: with yesterday's API model 4.2.25 release, there's basic support for s390 available in ovirt-engine. At this point in time, there are no oVirt yum repositories for the s390x architecture - not sure what the process would be to add s390x repositories and how to build the binary RPMs, at least for the host packages (i.e. vdsm-*). Maybe it would be possible to use the s390-koji infrastructure used to build Fedora for s390x?
Koji is very opinionated about how RPMs and specfiles should look, AFAIK. A pretty massive amount of work would be needed to get everything that is needed for a node to build on it, and then we would end up with a process that is quite different from how we currently do builds for other platforms.
(I might be wrong about this, since some oVirt packages get also built as part of the CentOS virt SIG, and that is done using Koji as well)
More specifically, Koji usually assumes the starting point for the build process would be a specfile, while in oVirt we typically generate the specfile and then the RPM as part of a bigger build process.
Does Fedora have an s390x server associated with it? There's a build system for Fedora on s390x: https://s390.koji.fedoraproject.org/koji We do use the same basic environment setup tool - mock - as the basis of our build infrastructure, so if Fedora is actually emulating s390x in some way while using mock, we might be able to do the same thing.
Just for general knowledge, the process for building oVirt repos is to have *-build-artifacts-* jobs for each project that build RPMs after patches get merged, and then have the change-queue collect the built packages, run them through ovirt-system-tests (a.k.a. OST) and finally deposit them into the 'tested' repo, from which they are copied nightly to the '*-snapshot' repos.
OST only tests for CentOS 7/x86_64 ATM, but we bring along packages for other distros and architectures via the same process while assuming that if a package for a given commit works for CentOS 7/x86_64 it would probably work for other platforms as well...
I've noticed that there's no test automation for non-x86 arches. How are the ppc64le packages then built and stored in the repos?
There is no test automation but there IS build automation (Supporting just PPC64LE for now). As I've said, we pass all the packages through the same pipeline and just publish them as the equivalent x86_64 packages get built.
Would it be conceivable to do the following? After a successful build and OST of a package, take the SRPM and submit it to s390-koji to produce the s390x binary RPMs. Then copy the binary RPMs into the respective repository, e.g. *-snapshot/rpm/el7/s390x (and update the repository metadata).
We don't have support for post-OST triggering in our framework ATM. This would also mean that the s390x packages would hit the repos out of sync with all the other packages. I'd much rather find some way to build the packages on oVirt's existing infrastructure than outsource it to Fedora. While we are related to Fedora in some ways, oVirt is its own project with its own tooling and processes.
participants (10)
- Barak Korren
- Christian Borntraeger
- Dan Horák
- Dan Horák
- Fred Rolland
- Greg Sheremeta
- Martin Sivak
- Peter Robinson
- R P Herrold
- Viktor Mihajlovski