[JIRA] (OVIRT-1772) s390x build support for oVirt infra
by Barak Korren (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1772?page=com.atlassian.jir... ]
Barak Korren updated OVIRT-1772:
--------------------------------
Blocked By: Code review
Status: Blocked (was: In Progress)
> s390x build support for oVirt infra
> -----------------------------------
>
> Key: OVIRT-1772
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1772
> Project: oVirt - virtualization made easy
> Issue Type: Improvement
> Components: oVirt CI
> Reporter: Barak Korren
> Assignee: infra
> Labels: s390x, standard-ci
>
> There has been some recent interest in the community in building oVirt node components for the s390x architecture.
> Here is a list of things we would need in order to enable s390x builds:
> * Bring up some s390x Jenkins slave VMs; this implies:
> ** Getting s390x VMs up and running
> ** Getting an operating system for these VMs
> ** Getting Java running on these VMs, in order to run the Jenkins agent.
> * Enable '{{mock_runner.sh}}' to create s390x build environments.
> * Create '{{build-artifacts}}' jobs for s390x
> The easiest way to get s390x slave VMs would be if we could get our hands on some real s390x machines, just like the ppc64le machines we currently have. But this does not seem likely to happen. An alternative would be to use some kind of s390x emulation; the Fedora project seems to use this approach for its s390x builds. A version of qemu that supports s390x is available in EPEL in the '{{qemu-system-s390x}}' package. Once installed, the package adds the following libvirt capabilities structure:
> {code}
> <guest>
> <os_type>hvm</os_type>
> <arch name='s390x'>
> <wordsize>64</wordsize>
> <emulator>/usr/bin/qemu-system-s390x</emulator>
> <machine maxCpus='255'>s390-virtio</machine>
> <machine canonical='s390-virtio' maxCpus='255'>s390</machine>
> <machine maxCpus='255'>s390-ccw-virtio</machine>
> <machine canonical='s390-ccw-virtio' maxCpus='255'>s390-ccw</machine>
> <domain type='qemu'/>
> </arch>
> <features>
> <cpuselection/>
> <deviceboot/>
> <disksnapshot default='on' toggle='no'/>
> </features>
> </guest>
> {code}
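> Based on the capabilities above, a guest definition for such a VM might look roughly like the following sketch. This is only an illustration: the name, memory and disk path are made-up placeholders, while the arch, machine type and emulator path come straight from the capabilities XML.
> {code}
> <domain type='qemu'>
>   <name>s390x-builder-1</name>
>   <memory unit='GiB'>4</memory>
>   <os>
>     <type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
>   </os>
>   <devices>
>     <emulator>/usr/bin/qemu-system-s390x</emulator>
>     <disk type='file' device='disk'>
>       <source file='/var/lib/libvirt/images/s390x-builder-1.qcow2'/>
>       <target dev='vda' bus='virtio'/>
>     </disk>
>   </devices>
> </domain>
> {code}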
> So it seems we can get s390x VMs running on our x86_64 hardware. The next issue to tackle would be getting an OS running on these VMs. There seems to be a Fedora s390x release but not a CentOS one. Neither of these projects releases an s390x cloud image, so we may end up having to make our own using '{{virt-install}}' or try to convince [Richard Jones|mailto:rjones@redhat.com] to make such images available in '{{virt-builder}}'.
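> If we go the home-made image route, the '{{virt-install}}' invocation would be roughly along these lines. This is just a sketch: the install-tree URL and the sizes are placeholders, and the exact flags would need to be verified against a real s390x install tree.
> {code}
> virt-install \
>   --name s390x-builder-1 \
>   --arch s390x \
>   --machine s390-ccw-virtio \
>   --memory 4096 \
>   --disk size=20 \
>   --location <fedora-s390x-install-tree-url>
> {code}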
> Once we have VMs up, we need to turn them into Jenkins slaves. Hopefully the s390x Fedora build includes Java so this may be trivial.
> To get '{{mock_runner.sh}}' support we would need appropriate '{{mock}}' configuration files. Suitable files for Fedora on s390x seem to already be shipped with the '{{mock}}' package, so there is not much to do there besides copying the file into our '{{jenkins}}' repo and making the usual adjustments we typically make to enable proxy and mirror support.
> Once we have all of that up and running, adding s390x '{{build-artifacts}}' jobs would be trivial; we'll just have to add the right tags in the JJB YAML.
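> For illustration, the JJB change would look something like the sketch below; the project and job-template names here are placeholders, not our actual templates.
> {code}
> - project:
>     name: example-project_standard
>     arch:
>       - x86_64
>       - ppc64le
>       - s390x
>     jobs:
>       - '{project}_{version}_build-artifacts-{distro}-{arch}'
> {code}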
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100074)
[JIRA] (OVIRT-1772) s390x build support for oVirt infra
by Barak Korren (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1772?page=com.atlassian.jir... ]
Barak Korren commented on OVIRT-1772:
-------------------------------------
An s390x VM has been added as a slave to both Jenkins instances. When the linked patches are merged, s390x support for oVirt CI will be ready for use by any oVirt project.
Now blocking the ticket on code review.
[JIRA] (OVIRT-1772) s390x build support for oVirt infra
by Barak Korren (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1772?page=com.atlassian.jir... ]
Barak Korren updated OVIRT-1772:
--------------------------------
Status: In Progress (was: To Do)
[JIRA] (OVIRT-1772) s390x build support for oVirt infra
by Barak Korren (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1772?page=com.atlassian.jir... ]
Barak Korren commented on OVIRT-1772:
-------------------------------------
The current direction for implementing support is to use a loaned slave managed by Dan Horak. This requires patching the STDCI code to be able to run on a slave where we don't have full sudo privileges, as well as adding some new mock_runner configuration. Patches implementing these two changes have been written and tested (linked to this ticket).
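For reference, the new mock_runner configuration amounts to a chroot config along these lines. This is a sketch based on the fedora-s390x configs shipped with '{{mock}}'; the exact values in the merged patch may differ.
{code}
config_opts['root'] = 'fedora-27-s390x'
config_opts['target_arch'] = 's390x'
config_opts['legal_host_arches'] = ('s390x',)
config_opts['dist'] = 'fc27'
config_opts['releasever'] = '27'
{code}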
[JIRA] (OVIRT-1788) new ui_sanity scenario for basic_suite -- need multiple firefoxes and chromium
by Barak Korren (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1788?page=com.atlassian.jir... ]
Barak Korren commented on OVIRT-1788:
-------------------------------------
{quote}
Perhaps just use old versions of the container? Example:
https://github.com/SeleniumHQ/docker-selenium/releases/tag/3.4.0-einsteinium
uses Firefox 54
{quote}
Yep. This is what Docker tags are for...
In fact, we should probably set things up so that we always use specific tagged versions in tests rather than '{{:latest}}', so that we don't introduce a rogue source of change into our testing apparatus.
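As a small illustration of that policy (a hypothetical helper, not something we currently ship), a check like this could gate job configs so that only explicitly tagged images get through:

```shell
# is_pinned: succeed only when an image reference carries an explicit
# tag that is not "latest", e.g. "selenium/node-firefox:3.4.0-einsteinium".
# Note: a sketch; it does not handle registry names that contain a port.
is_pinned() {
  tag="${1##*:}"
  [ "$tag" != "$1" ] && [ "$tag" != "latest" ]
}

is_pinned "selenium/node-firefox:3.4.0-einsteinium" && echo "pinned"
```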
> new ui_sanity scenario for basic_suite -- need multiple firefoxes and chromium
> ------------------------------------------------------------------------------
>
> Key: OVIRT-1788
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1788
> Project: oVirt - virtualization made easy
> Issue Type: Improvement
> Components: OST
> Reporter: Greg Sheremeta
> Assignee: infra
>
> I'm writing a suite that does headless UI testing. One goal is to open headless firefox and actually open the UI, perform a login, make sure things look good, make sure there are no ui.log errors, etc. I'll also eventually add chromium, which can run headless now too.
> The suite requires several firefox versions to be installed on the test machine, along with chromium. There are also some binary components required, geckodriver and chromedriver. These are not packaged.
> Ideally the browsers can be installed to /opt/firefox55, /opt/firefox56, /opt/chromium62, etc. on the machine running the suite. So I think it makes sense to maintain a custom rpm with all of this.
> Where can this rpm live? What is a reliable way to do this? (I know we want to avoid copr.)
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100074)
[JIRA] (OVIRT-1788) new ui_sanity scenario for basic_suite -- need multiple firefoxes and chromium
by Greg Sheremeta (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1788?page=com.atlassian.jir... ]
Greg Sheremeta commented on OVIRT-1788:
---------------------------------------
One issue I see is that the provided containers use only the latest browser.
Example:
https://github.com/SeleniumHQ/docker-selenium/blob/master/NodeFirefox/Doc...
ARG FIREFOX_VERSION=57.0.1
Any ideas on how we could use older browsers too?
Perhaps just use old versions of the container? Example:
https://github.com/SeleniumHQ/docker-selenium/releases/tag/3.4.0-einsteinium
uses Firefox 54