Build failed in Jenkins: system-sync_mirrors-fedora-base-fc29-x86_64 #218
by jenkins@jenkins.phx.ovirt.org
See <http://jenkins.ovirt.org/job/system-sync_mirrors-fedora-base-fc29-x86_64/...>
Changes:
[Sandro Bonazzola] ioprocess: drop 4.1 jobs
[Sandro Bonazzola] mom: drop 4.1 jobs
[Sandro Bonazzola] ovirt-hosted-engine-ha: drop 4.1 jobs
[Sandro Bonazzola] ovirt-engine-sdk-ruby: drop 4.1 jobs
[Sandro Bonazzola] ovirt-engine-nodejs-modules: drop 4.1 jobs
[Sandro Bonazzola] ovirt-engine-extension-logger-log4j: drop 4.1 jobs
[Sandro Bonazzola] ovirt-engine-extension-aaa-misc: drop 4.1 jobs
[Sandro Bonazzola] ovirt-engine-extension-aaa-ldap: drop 4.1 jobs
[Sandro Bonazzola] ovirt-engine-extension-aaa-jdbc: drop 4.1 jobs
[Sandro Bonazzola] ovirt-engine-dashboard: drop 4.1 jobs
[Sandro Bonazzola] ovirt-engine-api-model: drop 4.1 jobs
[Sandro Bonazzola] ovirt-engine-api-metamodel: drop 4.1 jobs
[Sandro Bonazzola] ovirt-demo-tool: drop 4.1 jobs
[Sandro Bonazzola] ovirt-containers: drop 4.1 jobs
[Sandro Bonazzola] ovirt-engine-dashboard: drop master jobs
[Sandro Bonazzola] ovirt-containers: drop abandoned project
------------------------------------------
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on mirrors.phx.ovirt.org (mirrors) in workspace <http://jenkins.ovirt.org/job/system-sync_mirrors-fedora-base-fc29-x86_64/ws/>
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url http://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
> git rev-parse --verify HEAD # timeout=10
Resetting working tree
> git reset --hard # timeout=10
> git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from http://gerrit.ovirt.org/jenkins.git
> git --version # timeout=10
> git fetch --tags --progress http://gerrit.ovirt.org/jenkins.git +refs/heads/*:refs/remotes/origin/* --prune
> git rev-parse origin/master^{commit} # timeout=10
Checking out Revision e8b9034e509402e9e9ce1073efc73c70a8c53793 (origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f e8b9034e509402e9e9ce1073efc73c70a8c53793
Commit message: "ovirt-containers: drop abandoned project"
> git rev-list --no-walk b30d5e49fe6b620704ed2244a1c36653357a5be7 # timeout=10
[system-sync_mirrors-fedora-base-fc29-x86_64] $ /bin/bash -xe /tmp/jenkins7420747090607629775.sh
+ jenkins/scripts/mirror_mgr.sh resync_yum_mirror fedora-base-fc29 x86_64 jenkins/data/mirrors-reposync.conf
+ MIRRORS_MP_BASE=/var/www/html/repos
+ MIRRORS_HTTP_BASE=http://mirrors.phx.ovirt.org/repos
+ MIRRORS_CACHE=/home/jenkins/mirrors_cache
+ MAX_LOCK_ATTEMPTS=120
+ LOCK_WAIT_INTERVAL=5
+ LOCK_BASE=/home/jenkins
+ OLD_MD_TO_KEEP=100
+ HTTP_SELINUX_TYPE=httpd_sys_content_t
+ HTTP_FILE_MODE=644
+ main resync_yum_mirror fedora-base-fc29 x86_64 jenkins/data/mirrors-reposync.conf
+ local command=resync_yum_mirror
+ command_args=("${@:2}")
+ local command_args
+ cmd_resync_yum_mirror fedora-base-fc29 x86_64 jenkins/data/mirrors-reposync.conf
+ local repo_name=fedora-base-fc29
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local sync_needed
+ mkdir -p /home/jenkins/mirrors_cache
+ verify_repo_fs fedora-base-fc29 yum
+ local repo_name=fedora-base-fc29
+ local repo_type=yum
+ sudo install -o jenkins -d /var/www/html/repos/yum /var/www/html/repos/yum/fedora-base-fc29 /var/www/html/repos/yum/fedora-base-fc29/base
+ check_yum_sync_needed fedora-base-fc29 x86_64 jenkins/data/mirrors-reposync.conf sync_needed
+ local repo_name=fedora-base-fc29
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local p_sync_needed=sync_needed
+ local reposync_out
+ echo 'Checking if mirror needs a resync'
Checking if mirror needs a resync
+ rm -rf /home/jenkins/mirrors_cache/fedora-base-fc29
++ IFS=,
++ echo x86_64
+ for arch in '$(IFS=,; echo $repo_archs)'
++ run_reposync fedora-base-fc29 x86_64 jenkins/data/mirrors-reposync.conf --urls --quiet
++ local repo_name=fedora-base-fc29
++ local repo_arch=x86_64
++ local reposync_conf=jenkins/data/mirrors-reposync.conf
++ extra_args=("${@:4}")
++ local extra_args
++ reposync --config=jenkins/data/mirrors-reposync.conf --repoid=fedora-base-fc29 --arch=x86_64 --cachedir=/home/jenkins/mirrors_cache --download_path=/var/www/html/repos/yum/fedora-base-fc29/base --norepopath --newest-only --urls --quiet
Traceback (most recent call last):
  File "/usr/bin/reposync", line 343, in <module>
    main()
  File "/usr/bin/reposync", line 175, in main
    my.doRepoSetup()
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in doRepoSetup
    return self._getRepos(thisrepo, True)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in _getRepos
    self._repos.doSetup(thisrepo)
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
    self.retrieveAllMD()
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in retrieveAllMD
    dl = repo._async and repo._commonLoadRepoXML(repo)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1465, in _commonLoadRepoXML
    local = self.cachedir + '/repomd.xml'
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 774, in <lambda>
    cachedir = property(lambda self: self._dirGetAttr('cachedir'))
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 757, in _dirGetAttr
    self.dirSetup()
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 735, in dirSetup
    self._dirSetupMkdir_p(dir)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 712, in _dirSetupMkdir_p
    raise Errors.RepoError, msg
yum.Errors.RepoError: Error making cache directory: /home/jenkins/mirrors_cache/centos-kvm-common-el7/gen error was: [Errno 17] File exists: '/home/jenkins/mirrors_cache/centos-kvm-common-el7/gen'
+ reposync_out=
Build step 'Execute shell' marked build as failure
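For what it's worth, the traceback shows yum aborting while setting up the shared
cache under /home/jenkins/mirrors_cache: the makedirs call raised EEXIST because
the directory already existed by the time yum tried to create it, which looks like
a race between concurrent mirror jobs sharing that cache. Below is only a minimal
sketch of a defensive wrapper; it is not part of mirror_mgr.sh, the function name
and retry-with-pause policy are assumptions, and it only reuses the reposync flags
already visible in the trace above.

    # Sketch only -- not part of mirror_mgr.sh. Retries reposync a few times,
    # assuming the EEXIST failure is a transient race on the shared cache.
    reposync_with_retry() {
        local repo_name="$1" repo_arch="$2" reposync_conf="$3"
        local attempt
        for attempt in 1 2 3; do
            if reposync \
                --config="$reposync_conf" \
                --repoid="$repo_name" \
                --arch="$repo_arch" \
                --cachedir="${MIRRORS_CACHE:-/home/jenkins/mirrors_cache}" \
                --norepopath --newest-only --urls --quiet
            then
                return 0
            fi
            echo "reposync attempt ${attempt} for ${repo_name} failed, retrying" >&2
            sleep 5
        done
        return 1
    }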
[JENKINS] Failed to set up project kubevirt_kubevirt_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#3215 kubevirt [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-rf329.
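A quick way for whoever picks this up to confirm the disk-space theory on the node
could look like the sketch below; the mount points and the 90% threshold are
illustrative assumptions, not something project_setup.sh or docker_cleanup.py
actually runs.

    # Illustrative check only; mount points and threshold are assumptions.
    THRESHOLD=90
    for mount in / /var/lib/docker; do
        usage=$(df --output=pcent "$mount" 2>/dev/null | tail -n1 | tr -dc '0-9')
        if [ -n "$usage" ] && [ "$usage" -ge "$THRESHOLD" ]; then
            echo "WARNING: ${mount} is ${usage}% full on $(hostname)"
        fi
    done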
[JENKINS] Failed to set up project kubevirt_kubevirt_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#3212 kubevirt [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-fbq72.
[Ovirt] [CQ weekly status] [18-01-2019]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
the status before and after the weekend.
Please refer to the colour map below for further information on the meaning
of the colours.
*CQ-4.2*: RED (#1)
One issue is still pending:
1. ovirt-provider-ovn is failing due to change
https://gerrit.ovirt.org/#/c/96926/ - ip_version is mandatory on POSTs
The change is failing the OST test because the IP version now has to be
provided explicitly as a result of introducing IPv6 support.
There is a patch by Miguel which I merged; there are a few patches in testing
now, so we are waiting to see whether the patch fixes the issue.
https://gerrit.ovirt.org/#/c/97072/ - basic-suite-42: align w/ networking
API
Jira: https://ovirt-jira.atlassian.net/browse/OVIRT-2655
*CQ-Master:* GREEN (#2)
The last failure was on Jan 14th for project ovirt-ansible-hosted-engine-setup.
The project was failing on
https://docs.fedoraproject.org/en-US/packaging-guidelines/Directory_Repla...
The Jira was closed today as projects are passing:
https://ovirt-jira.atlassian.net/browse/OVIRT-2650
Current running jobs for 4.2 [1] and master [2] can be found here:
[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-...
[2]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures indicate a healthy project, as we expect a number of
failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the
project(s) has been broken. Only active regressions will be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
[ OST Failure Report ] [ oVirt 4.2 (ovirt-provider-ovn) ] [ 18-01-2019 ] [ 098_ovirt_provider_ovn.use_ovn_provider ]
by Dafna Ron
Hi,
We have a failure in the OVN tests on the 4.2 branch. Marcin/Miguel, can you
please take a look?
Jira opened: https://ovirt-jira.atlassian.net/browse/OVIRT-2655
Link and headline of suspected patches:
https://gerrit.ovirt.org/#/c/96926/ - ip_version is mandatory on POSTs
Link to Job:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3742/
Link to all logs:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3742/artifact/...
(Relevant) error snippet from the log:
<error>
2019-01-18 00:14:30,591 root Starting server
2019-01-18 00:14:30,592 root Version: 1.2.19-0.20190117180529.gite1d4195
2019-01-18 00:14:30,592 root Build date: 20190117180529
2019-01-18 00:14:30,592 root Githash: e1d4195
2019-01-18 00:20:39,394 ovsdbapp.backend.ovs_idl.vlog ssl:127.0.0.1:6641: no response to inactivity probe after 5.01 seconds, disconnecting
2019-01-18 00:45:01,435 root From: ::ffff:192.168.200.1:49008 Request: POST /v2.0/subnets/
2019-01-18 00:45:01,435 root Request body:
{"subnet": {"network_id": "99c260ec-dad4-40b9-8732-df32dd54bd00", "dns_nameservers": ["8.8.8.8"], "cidr": "1.1.1.0/24", "gateway_ip": "1.1.1.1", "name": "subnet_1"}}
2019-01-18 00:45:01,435 root Missing 'ip_version' attribute
Traceback (most recent call last):
  File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 134, in _handle_request
    method, path_parts, content
  File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line 175, in handle_request
    return self.call_response_handler(handler, content, parameters)
  File "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 36, in call_response_handler
    return response_handler(ovn_north, content, parameters)
  File "/usr/share/ovirt-provider-ovn/handlers/neutron_responses.py", line 154, in post_subnets
    subnet = nb_db.add_subnet(received_subnet)
  File "/usr/share/ovirt-provider-ovn/neutron/neutron_api_mappers.py", line 74, in wrapper
    validate_rest_input(rest_data)
  File "/usr/share/ovirt-provider-ovn/neutron/neutron_api_mappers.py", line 596, in validate_add_rest_input
    raise BadRequestError('Missing \'ip_version\' attribute')
BadRequestError: Missing 'ip_version' attribute
</error>
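For reference, the request body in the log is rejected only because it lacks the
now-mandatory ip_version field. A request of the following shape should pass the
new validation for an IPv4 subnet; the endpoint URL and the missing authentication
are illustrative placeholders, not the real provider configuration.

    # Illustrative request only; https://provider.example:9696 is a placeholder
    # for the real ovirt-provider-ovn endpoint and auth headers are omitted.
    curl -k -X POST https://provider.example:9696/v2.0/subnets/ \
        -H 'Content-Type: application/json' \
        -d '{"subnet": {"network_id": "99c260ec-dad4-40b9-8732-df32dd54bd00",
                        "ip_version": 4,
                        "dns_nameservers": ["8.8.8.8"],
                        "cidr": "1.1.1.0/24",
                        "gateway_ip": "1.1.1.1",
                        "name": "subnet_1"}}'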