[Ovirt] [CQ weekly status] [28-06-2019]
by Dafna Ron
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to the colour map below for further information on the meaning
of the colours.
*CQ-4.2*: GREEN (#1)
Last failure was on 28-06 for mom due to the failed test
002_bootstrap.add_master_storage_domain. This is a known issue which has a
fix that was probably not added to the upgrade suite. I re-triggered the
change to see if it passes.
*CQ-4.3*: GREEN (#2)
Last failure was on 27-06-2019 for project ovirt-web-ui due to the test
get_host_device, which seemed to be a race. I re-triggered the patch and it
passed.
*CQ-Master:* GREEN (#1)
Last failure was on 28-06-2019 for project ovirt-node-ng-image on
build-artifacts for fc29
There is an open ticket: https://ovirt-jira.atlassian.net/browse/OVIRT-2747
Currently running jobs for 4.2 [1], 4.3 [2] and master [3] can be found
here:
[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-...
[2]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change...
[3]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures indicate a healthy project, as we expect a number
of failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the
project(s) have been broken. Only active regressions will be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
[VDSM] New random failure on Fedora 29
by Nir Soffer
I had this unrelated failure now, never seen it before.
======================================================================
ERROR: test_event_handler (client_test.VdsmClientTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests/testValidation.py",
line 274, in wrapper
return f(*args, **kwargs)
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests/client_test.py",
line 220, in test_event_handler
client.Test.sendEvent()
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/client.py",
line 294, in _call
raise TimeoutError(method, kwargs, timeout)
TimeoutError: Request Test.sendEvent with args {} timed out after 3 seconds
Build:
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/6692//artifact/ch...
Looking at the captured logs, we see this alarming error:
2019-06-24 19:49:58,108 ERROR (Detector thread) [vds.dispatcher]
uncaptured python exception,closing channel
<yajsonrpc.betterAsyncore.Dispatcher ('::1', 34444, 0, 0) at
0x7f74983a5c20> (<type 'exceptions.AttributeError'>:'NoneType' object
has no attribute 'headers'
[/usr/lib64/python2.7/asyncore.py|readwrite|108]
[/usr/lib64/python2.7/asyncore.py|handle_read_event|449]
[vdsm/lib/yajsonrpc/betterAsyncore.py|handle_read|71]
[vdsm/lib/yajsonrpc/betterAsyncore.py|_delegate_call|168]
[vdsm/lib/vdsm/protocoldetector.py|handle_read|129]
[vdsm/lib/yajsonrpc/stompserver.py|handle_socket|412]
[vdsm/lib/vdsm/rpc/bindingjsonrpc.py|add_socket|54]
[vdsm/lib/yajsonrpc/stompserver.py|createListener|379]
[vdsm/lib/yajsonrpc/stompserver.py|StompListener|345]
[vdsm/lib/yajsonrpc/betterAsyncore.py|__init__|47]
[vdsm/lib/yajsonrpc/betterAsyncore.py|switch_implementation|86]
[vdsm/lib/yajsonrpc/stompserver.py|init|363]
[vdsm/lib/vdsm/rpc/bindingjsonrpc.py|_onAccept|57]
[vdsm/lib/yajsonrpc/stomp.py|set_message_handler|635]
[/usr/lib64/python2.7/asyncore.py|handle_read_event|449]
[vdsm/lib/yajsonrpc/betterAsyncore.py|handle_read|71]
[vdsm/lib/yajsonrpc/betterAsyncore.py|_delegate_call|168]
[vdsm/lib/yajsonrpc/stomp.py|handle_read|411]
[vdsm/lib/yajsonrpc/stomp.py|parse|313]
[vdsm/lib/yajsonrpc/stomp.py|_parse_header|249])
(betterAsyncore:179)
An unhandled exception means we did not implement some handle_error, so we
got asyncore's default error handling, which gives very unhelpful
tracebacks:
https://github.com/python/cpython/blob/6cbff564f00fbe17b5443f6986a44ce116...
So we have 2 issues:
- missing handle_error - fixing this first will help to fix the actual
error below; a sketch of such a handler follows
- the actual error: 'NoneType' object has no attribute 'headers', from:
yajsonrpc/stomp.py line 249
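For the first issue, a minimal sketch of what overriding handle_error() on
an asyncore dispatcher could look like (an illustration only, not the
actual betterAsyncore code; the Dispatcher subclass and logger name here
are assumptions):

    import asyncore
    import logging

    log = logging.getLogger("vds.dispatcher")

    class Dispatcher(asyncore.dispatcher):

        def handle_error(self):
            # asyncore calls handle_error() from inside an except
            # block, so the current exception is available here; log
            # the full traceback instead of asyncore's compact
            # one-line summary.
            log.exception("Unhandled exception in dispatcher %s", self)
            # Close the channel so we fail loudly but cleanly instead
            # of leaving it in an undefined state.
            self.close()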
Nir
yum vs dnf on el8 host installation?
by Milan Zamazal
Hi, when I experiment with installing a RHEL 8 host from Engine (master
branch running on RHEL 7), I get the following error in host deploy:
2019-06-25 12:59:48,701+0200 DEBUG otopi.plugins.otopi.network.firewalld firewalld._get_firewalld_cmd_version:116 firewalld version: 0.6.3
2019-06-25 12:59:48,701+0200 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE customization METHOD otopi.plugins.otopi.network.firewalld.Plugin._customization (None)
2019-06-25 12:59:48,701+0200 DEBUG otopi.context context.dumpEnvironment:731 ENVIRONMENT DUMP - BEGIN
2019-06-25 12:59:48,702+0200 DEBUG otopi.context context.dumpEnvironment:741 ENV NETWORK/firewalldAvailable=bool:'True'
2019-06-25 12:59:48,702+0200 DEBUG otopi.context context.dumpEnvironment:745 ENVIRONMENT DUMP - END
2019-06-25 12:59:48,702+0200 DEBUG otopi.context context._executeMethod:127 Stage customization METHOD otopi.plugins.otopi.core.config.Plugin._customize1
2019-06-25 12:59:48,702+0200 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventStart STAGE customization METHOD otopi.plugins.otopi.core.config.Plugin._customize1 (None)
2019-06-25 12:59:48,703+0200 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE customization METHOD otopi.plugins.otopi.core.config.Plugin._customize1 (None)
2019-06-25 12:59:48,703+0200 DEBUG otopi.context context._executeMethod:127 Stage customization METHOD otopi.plugins.ovirt_host_deploy.kdump.packages.Plugin._customization
2019-06-25 12:59:48,703+0200 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventStart STAGE customization METHOD otopi.plugins.ovirt_host_deploy.kdump.packages.Plugin._customization (None)
2019-06-25 12:59:48,704+0200 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
File "/tmp/ovirt-AX5hJa1Fcr/pythonlib/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/tmp/ovirt-AX5hJa1Fcr/otopi-plugins/ovirt-host-deploy/kdump/packages.py", line 216, in _customization
self._kexec_tools_version_supported()
File "/tmp/ovirt-AX5hJa1Fcr/otopi-plugins/ovirt-host-deploy/kdump/packages.py", line 143, in _kexec_tools_version_supported
from rpmUtils.miscutils import compareEVR
ModuleNotFoundError: No module named 'rpmUtils'
2019-06-25 12:59:48,705+0200 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Environment customization': No module named 'rpmUtils'
It's apparently because the `rpmUtils' module from the `yum' package is no
longer present on RHEL 8 (neither in `yum' nor in any other easily
guessable package). I'm not sure how el7 vs el8 host installation is
supposed to work and whether I can do anything at the moment to proceed a
bit further.
I suppose similar yum vs dnf issues will need some attention; not only in
host deploy but everywhere yum or rpm manipulation is used.
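For the compareEVR case specifically, one possible direction is the rpm
Python bindings, which should still be available on el8. A hedged sketch
(compare_evr is a hypothetical helper, not existing host-deploy code):

    import rpm

    def compare_evr(evr1, evr2):
        """Compare two (epoch, version, release) tuples, like
        rpmUtils.miscutils.compareEVR did; returns -1, 0 or 1."""
        def as_label(evr):
            epoch, version, release = evr
            # rpm.labelCompare() expects the epoch as a string
            # (or None), not an int.
            return (str(epoch) if epoch is not None else None,
                    version, release)
        return rpm.labelCompare(as_label(evr1), as_label(evr2))

For example, compare_evr((0, '2.0.19', '1.el8'), (0, '2.0.15', '1.el8'))
would return 1. Whether something like this is acceptable for host deploy
on both el7 and el8 would need checking.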
Thanks,
Milan
Re: [CentOS-devel] The future of the community build services
by Sandro Bonazzola
Hi,
On Mon, 24 Jun 2019 at 17:21, Thomas Oulevey <
thomas.oulevey(a)cern.ch> wrote:
> Hi Folks,
>
> It's time to have a look at the SIGs and the community needs for the
> next version of the community build service (cbs). We would like to
> request feedback and know if the different communities plan on using
> modular builds, and what their timeline would be to produce
> C8 artifacts (rpms, imagefactory builds, etc.).
>
From the oVirt perspective, we are not planning to build modules, but we
need to see how the advanced virtualization module will look within CentOS 8.
We plan to consume some modules, but not to build any, also when building in
cbs (we'll need at least the javapackages-tools module in the cbs build root).
>
> As you know Koji is now used to build CentOS 8 [1], and is based on
> the mbbox [2] distribution.
>
> It makes sense for cbs.centos.org to follow this trend and be based on
> the same templates.
>
> At the same time we needed to evaluate how and where we can deploy the
> new Openshift based templates, and associated builders for different
> arches.
>
> It's understood that the transition between both systems should be
> smooth and not impact your C6/C7 builds and release cycle. We will
> plan accordingly.
>
> Please let us know, if you have comments and/or new ideas.
>
> --
> Thomas Oulevey
>
>
> [1]: https://blog.centos.org/2019/06/centos-8-status-17-june-2019/
> [2]: https://github.com/puiterwijk/mbbox
> _______________________________________________
> CentOS-devel mailing list
> CentOS-devel(a)centos.org
> https://lists.centos.org/mailman/listinfo/centos-devel
>
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
Fwd: [CentOS-devel] The future of the community build services
by Sandro Bonazzola
FYI
---------- Forwarded message ---------
From: Thomas Oulevey <thomas.oulevey(a)cern.ch>
Date: Mon, 24 Jun 2019 at 17:21
Subject: [CentOS-devel] The future of the community build services
To: <centos-devel(a)centos.org>
Hi Folks,
It's time to have a look at the SIGs and the community needs for the
next version of the community build service (cbs). We would like to
request feedback and know if the different communities plan on using
modular builds, and what their timeline would be to produce
C8 artifacts (rpms, imagefactory builds, etc.).
As you know Koji is now used to build CentOS 8 [1], and is based on
the mbbox [2] distribution.
It makes sense for cbs.centos.org to follow this trend and be based on
the same templates.
At the same time we needed to evaluate how and where we can deploy the
new Openshift based templates, and associated builders for different arches.
It's understood that the transition between both systems should be
smooth and not impact your C6/C7 builds and release cycle. We will
plan accordingly.
Please let us know, if you have comments and/or new ideas.
--
Thomas Oulevey
[1]: https://blog.centos.org/2019/06/centos-8-status-17-june-2019/
[2]: https://github.com/puiterwijk/mbbox
_______________________________________________
CentOS-devel mailing list
CentOS-devel(a)centos.org
https://lists.centos.org/mailman/listinfo/centos-devel
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
Weakness of repos in OST
by Dominik Holler
Hi,
from my point of view, we are not testing the repos in OST,
because we manage the packages manually.
The clean way would be to install something like
https://resources.ovirt.org/pub/yum-repo/ovirt-release43-pre.rpm
but this would download each package multiple times on each run.
Maybe a way to test the repos would be an OST run which bypasses
lago's repo management.
What is your view on this?
Dominik
[Ovirt] [CQ weekly status] [21-06-2019]
by Dafna Ron
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to the colour map below for further information on the meaning
of the colours.
*CQ-4.2*: GREEN (#1)
Last failure was on 18-06 for project v2v-conversion-host due to a failed
build-artifacts job,
which was already fixed by subsequently merged patches.
*CQ-4.3*: RED (#1)
1. We have a failure on ovirt-engine-metrics due to a dependency on a new
ansible package that has not yet been synced to the CentOS repo.
I am adding the virt-sig-common repo until this is synced next week:
https://gerrit.ovirt.org/#/c/101023/
*CQ-Master:* RED (#1)
1. Same failure on ovirt-engine-metrics as in 4.3.
2. ovirt-hosted-engine-setup is failing due to a package dependency change
from python-libguestfs to python2-libguestfs. A mail was sent to Ido to
check the issue.
Currently running jobs for 4.2 [1], 4.3 [2] and master [3] can be found
here:
[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-...
[2]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change...
[3]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures indicate a healthy project, as we expect a number
of failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the
project(s) have been broken. Only active regressions will be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)