We are experiencing CI failures on VDSM patches.
This is what we see in the logs:
out=`tox --version`; \
if [ $? -ne 0 ]; then \
    echo "Error: cannot run tox, please install tox 2.9.1 or later"; \
    exit 1; \
fi; \
version=`echo $out | cut -d' ' -f1`; \
if python2.7 build-aux/vercmp $version 2.9.1; then \
    echo "Error: tox is too old, please install tox 2.9.1 or later"; \
    exit 1; \
fi
Traceback (most recent call last):
  File "/usr/bin/tox", line 7, in <module>
    from tox import cmdline
  File "/usr/lib/python2.7/site-packages/tox/__init__.py", line 4, in <module>
    from .hookspecs import hookimpl
  File "/usr/lib/python2.7/site-packages/tox/hookspecs.py", line 4, in <module>
    from pluggy import HookimplMarker
  File "/usr/lib/python2.7/site-packages/pluggy/__init__.py", line 16, in <module>
    from .manager import PluginManager, PluginValidationError
  File "/usr/lib/python2.7/site-packages/pluggy/manager.py", line 6, in <module>
    line 9, in <module>
  File "/usr/lib/python2.7/site-packages/zipp.py", line 12, in <module>
  File "/usr/lib/python2.7/site-packages/more_itertools/__init__.py", line 1, in <module>
    from more_itertools.more import *  # noqa
  File "/usr/lib/python2.7/site-packages/more_itertools/more.py", line 340
    def _collate(*iterables, key=lambda a: a, reverse=False):
SyntaxError: invalid syntax
Error: cannot run tox, please install tox 2.9.1 or later
make: *** [tox] Error 1
+ '[' 2 -ne 0 ']'
+ echo '*** err: 2'
*** err: 2
+ tar --directory /var/log --exclude 'journal/*' -czf
+ tar --directory /var/host_log --exclude 'journal/*' -czf
+ python2 tests/storage/userstorage.py teardown
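For reference, the failing more_itertools line uses keyword-only
arguments (PEP 3102), which are Python 3-only syntax; more-itertools
dropped Python 2 support in 6.0, so any py2 import chain that reaches
it (here tox -> pluggy -> zipp -> more_itertools) blows up. A minimal
sketch of a possible workaround, assuming the slave's site-packages is
pip-managed:

    # Pin the last py2-compatible more-itertools series and re-check tox.
    python2 -m pip install 'more-itertools<6'
    python2 -c 'import more_itertools; print(more_itertools.__version__)'
    tox --version    # should print a version instead of the traceback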
Can someone look at it?
Thank you in advance!
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to the colour map below for further information on the meaning
of each status.
*CQ-4.2*: GREEN (#2)
Last failure was on 20-08 for ovirt-ansible-hosted-engine-setup. It was
caused by a random network issue (fetching the repo from GitHub); the
retriggered job for the change passed.
*CQ-4.3*: GREEN (#2)
Last failure was on 20-08 for ovirt-ansible-hosted-engine-setup. It was
caused by a random network issue; the retriggered job for the change passed.
*CQ-Master*: RED (#1)
Last failure was on 25-08 for ovirt-engine, caused by a failed clone of the
engine repo (even with the recently increased 20-minute timeout). The
related ticket from last time was reopened.
Currently running jobs for 4.2, 4.3 and master can be found
Have a nice week!
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our tests
1-3 days GREEN (#1)
4-7 days GREEN (#2)
Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting
failures or regressions
** intermittent failures would be expected in a healthy project, as we
anticipate a number of failures during the week
** I will not report any of the solved failures or regressions.
Solved job failures YELLOW (#1)
Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the
project(s) has been broken. Only active regressions will be reported.
1-3 days RED (#1)
4-7 days RED (#2)
Over 7 days RED (#3)
Please help with CentOS Virtualization SIG testing: if we can confirm the
packages are working fine, we'll have them built regularly with new CentOS
releases.
You can just reply here or directly to me with your feedback.
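If it helps, a minimal sketch of how the SIG builds can be pulled in for
testing (the release package name is an assumption on my side; check what
`yum search centos-release-ovirt` actually offers):

    # Hypothetical: enable the Virt SIG oVirt repo from CentOS Extras,
    # then install the component you want to exercise.
    sudo yum install -y centos-release-ovirt43
    sudo yum install -y ovirt-engine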
### A pinch of history
As a short introduction for those who do not know me, as I rarely post
here: I'm part of the Red Hat OSPO Comminfra Team (OSPO was previously
called OSAS), and I currently host and take care of the website.
This topic has been discussed many times: neither the people writing the
site nor those of us maintaining it are happy with the current state of
affairs. But we had no real plan, and picking a random tool that might
lead to similar problems, without any way to support it properly, was
not very appealing. Moreover, we tried migrating to Middleman 4, the
obvious path to modernization, but that turned out to be more complicated
than expected, and the new version had terrible performance problems. As
you know, version 3 is already slow, and that's quite an understatement.
In the meantime a few things have changed, including the people taking
care of the website, so let's talk about the current situation.
(This is the very short version of the history around this.)
### Why now?
What has changed:
* several projects moved to Jekyll, a tool we were already
investigating at the time and which powers GitHub Pages (GitHub created
it, in fact), and they were very happy with the result; we believe it
has the necessary features, it is well maintained because GitHub uses
it heavily, and it's very fast
* we've tried Jekyll ourselves, we're happy with it too, and we now
have the Ansible playbooks and tooling to deploy it
* over time the website had some waves of cleanup (it still needs quite
some love though), and recently the blog moved to WordPress to get
comment support and a friendlier interface (see OVIRT-2652), which
allowed more cleanup (see #2030)
With the blog separated, a lot of custom Ruby code removed, and the
tooling ready, I believe we can now work on migrating the content.
We may realize Jekyll is not the right tool, but people seemed to like
the idea at the time, and current experience seems to indicate it should
improve things and be maintainable. The goal here is to experiment and
switch to production only if we're happy with it.
### Early work
I have started a branch called 'jekyll_migration' to hold my work.
This is very early work (I just started), I have already hit various
difficulties, and I can't commit 100% of my time to it, so it will
take some time.
Several of my changes happened not to be specific to the migration, and
the current site would benefit from these fixes/cleanups/simplifications,
so I'll extract these changes and create separate PRs for master.
If you wish to help, then you can contact me directly or reply to this
thread. You may also create PRs to this topic branch, but please do not
push anything directly.
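If you want to try the branch locally, a minimal sketch (assuming the
usual Jekyll/Bundler layout; the repo URL is the GitHub one, adjust if
you use a different remote):

    # Hypothetical quick start for the topic branch; exact Gemfile
    # contents may differ while the migration is in flux.
    git clone https://github.com/oVirt/ovirt-site && cd ovirt-site
    git checkout jekyll_migration
    gem install bundler && bundle install
    bundle exec jekyll serve    # serves the site on http://localhost:4000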
Since yesterday I can't get ovirt-provider-ovn builds to run; each one
fails with the error below:
error: index-pack died of signal 15
09:27:35 fatal: index-pack failed
09:27:35 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2042)
09:27:35 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1761)
09:27:35 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$400(CliGitAPIImpl.java:72)
09:27:35 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:442)
09:27:35 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:655)
09:27:35 at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
09:27:35 at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
09:27:35 at hudson.remoting.UserRequest.perform(UserRequest.java:212)
09:27:35 at hudson.remoting.UserRequest.perform(UserRequest.java:54)
09:27:35 at hudson.remoting.Request$2.run(Request.java:369)
09:27:35 at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
09:27:35 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
09:27:35 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
09:27:35 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
09:27:35 at java.lang.Thread.run(Thread.java:748)
09:27:35 Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to vm0038.workers-phx.ovirt.org
09:27:35 at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
09:27:35 at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
09:27:35 at hudson.remoting.Channel.call(Channel.java:957)
09:27:35 at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)
09:27:35 at sun.reflect.GeneratedMethodAccessor793.invoke(Unknown Source)
09:27:35 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:27:35 at java.lang.reflect.Method.invoke(Method.java:498)
09:27:35 at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)
09:27:35 at com.sun.proxy.$Proxy118.execute(Unknown Source)
09:27:35 at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1152)
09:27:35 at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
09:27:35 at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124)
09:27:35 at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93)
09:27:35 at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80)
09:27:35 at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
09:27:35 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
09:27:35 ... 4 more
It can be seen on:
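For context, signal 15 is SIGTERM, i.e. something killed index-pack
mid-transfer; given the recently bumped 20-minute clone timeout mentioned
elsewhere, the git plugin's clone/fetch timeout is the obvious suspect.
A quick sketch to check whether a clone can finish in time from a worker
(repo URL is my assumption, adjust as needed):

    # Time a full clone from the worker; if it exceeds the configured
    # Jenkins timeout, index-pack gets SIGTERMed exactly as in the log.
    time git clone https://gerrit.ovirt.org/ovirt-provider-ovn /tmp/ovn-test
    # A shallow clone may be a mitigation if full history isn't needed:
    time git clone --depth 1 https://gerrit.ovirt.org/ovirt-provider-ovn /tmp/ovn-shallow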
We finished a paper on open source projects' Coverity usage in May; it will be published at ISSRE'19 <http://2019.issre.net/>. (Attached)
Thanks for your help in giving me access to the Coverity data.
I'd love to hear your feedback on this whenever you have time.
Also, based on your experience with static analysis tools, I'd love to hear any future research suggestions you might have.
I've been trying to reproduce an issue where it was possible, using ansible, to create an image in one DC with its disk in another one.
Since adding validation to the ImportRepoImage command turned out not to be enough, this time I am trying to stick as close as possible to the original reproducer.
The OS is CentOS 7 and ansible-playbook is 2.8.4 (the latest, that is).
I have a simplistic yml that references vars from the inventory file, and the inventory file is very straightforward as well, just a bunch of vars.
However, when I run this against my engine, which is latest master, ansible logs in successfully and then fails with
"msg": "The response content type 'text/html;charset=UTF-8' isn't the expected XML. Is the path '/ovirt-********/api' included in the 'url' parameter correct? The typical one is '/ovirt-engine/api'"
This ovirt-******* is very intriguing. I sniffed the traffic, and here is what happens, apparently:
2019-08-16 20:56:18 ::1 ::1 > POST apple:8080 /ovirt-engine/sso/oauth/token HTTP/1.1 - -
2019-08-16 20:56:18 ::1 ::1 < - - - HTTP/1.1 200 OK
2019-08-16 20:56:18 ::1 ::1 > GET apple:8080 /ovirt-********/api/templates?search=name%3Dfc28-cloud-test HTTP/1.1 - -
2019-08-16 20:56:18 ::1 ::1 < - - - HTTP/1.1 404 Not Found
So it actually tries to reach /ovirt-********/api/templates?search=name%3Dfc28-cloud-test and gets a 404 page.
One would think that something is wrong with ansible, but when I change the engine url in the inventory file to https://10-37-137-185.rhev.lab.eng.brq.redhat.com/ovirt-engine/api everything runs fine.
So I guess it's the engine messing up the URL after all?
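For what it's worth, the API path can be checked with ansible out of the
picture (credentials are placeholders):

    # Hypothetical sanity check: hit the REST API root directly; a 200
    # with XML here means the path itself is fine and the mangling
    # happens later, in whatever builds the follow-up request URL.
    curl -k -u 'admin@internal:PASSWORD' \
      'https://10-37-137-185.rhev.lab.eng.brq.redhat.com/ovirt-engine/api'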