[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1389 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-bxfjq.
5 years, 10 months
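The notification above asks the infra owner to verify free disk space on the slave. A minimal sketch of such a check in Python; the path, threshold, and helper name are illustrative assumptions and are not part of project_setup.sh or docker_cleanup.py:

    import os

    def free_gib(path="/var/lib/docker"):
        # Free space on the filesystem holding `path`, in GiB (POSIX only).
        st = os.statvfs(path)
        return st.f_bavail * st.f_frsize / float(1024 ** 3)

    if __name__ == "__main__":
        # 10 GiB is an arbitrary threshold chosen for illustration.
        if free_gib() < 10:
            raise SystemExit("low disk space - docker_cleanup.py may be failing")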
[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1390 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-bjd42.
5 years, 10 months
[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1389 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-9xgpc.
5 years, 10 months
[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1388 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-kb8b6.
5 years, 10 months
[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1387 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-rwdl1.
5 years, 10 months
Build failed in Jenkins:
system-sync_mirrors-fedora-base-fc29-x86_64 #328
by jenkins@jenkins.phx.ovirt.org
See <http://jenkins.ovirt.org/job/system-sync_mirrors-fedora-base-fc29-x86_64/...>
Changes:
[Liora Milbaum] master_deploy: versioned_value filter
------------------------------------------
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on mirrors.phx.ovirt.org (mirrors) in workspace <http://jenkins.ovirt.org/job/system-sync_mirrors-fedora-base-fc29-x86_64/ws/>
No credentials specified
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url http://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
> git rev-parse --verify HEAD # timeout=10
Resetting working tree
> git reset --hard # timeout=10
> git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from http://gerrit.ovirt.org/jenkins.git
> git --version # timeout=10
> git fetch --tags --progress http://gerrit.ovirt.org/jenkins.git +refs/heads/*:refs/remotes/origin/* --prune # timeout=10
> git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 798d77b25a396540dd740e97ba9075b6e3926310 (origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 798d77b25a396540dd740e97ba9075b6e3926310 # timeout=10
Commit message: "master_deploy: versioned_value filter"
> git rev-list --no-walk 6d5cd8a6ec82ac556b4c414207b85536d44f39b8 # timeout=10
[system-sync_mirrors-fedora-base-fc29-x86_64] $ /bin/bash -xe /tmp/jenkins1463866524962743710.sh
+ jenkins/scripts/mirror_mgr.sh resync_yum_mirror fedora-base-fc29 x86_64 jenkins/data/mirrors-reposync.conf
+ MIRRORS_MP_BASE=/var/www/html/repos
+ MIRRORS_HTTP_BASE=http://mirrors.phx.ovirt.org/repos
+ MIRRORS_CACHE=/home/jenkins/mirrors_cache
+ MAX_LOCK_ATTEMPTS=120
+ LOCK_WAIT_INTERVAL=5
+ LOCK_BASE=/home/jenkins
+ OLD_MD_TO_KEEP=100
+ HTTP_SELINUX_TYPE=httpd_sys_content_t
+ HTTP_FILE_MODE=644
+ main resync_yum_mirror fedora-base-fc29 x86_64 jenkins/data/mirrors-reposync.conf
+ local command=resync_yum_mirror
+ command_args=("${@:2}")
+ local command_args
+ cmd_resync_yum_mirror fedora-base-fc29 x86_64 jenkins/data/mirrors-reposync.conf
+ local repo_name=fedora-base-fc29
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local sync_needed
+ mkdir -p /home/jenkins/mirrors_cache
+ verify_repo_fs fedora-base-fc29 yum
+ local repo_name=fedora-base-fc29
+ local repo_type=yum
+ sudo install -o jenkins -d /var/www/html/repos/yum /var/www/html/repos/yum/fedora-base-fc29 /var/www/html/repos/yum/fedora-base-fc29/base
+ check_yum_sync_needed fedora-base-fc29 x86_64 jenkins/data/mirrors-reposync.conf sync_needed
+ local repo_name=fedora-base-fc29
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local p_sync_needed=sync_needed
+ local reposync_out
+ echo 'Checking if mirror needs a resync'
Checking if mirror needs a resync
+ rm -rf /home/jenkins/mirrors_cache/fedora-base-fc29
++ IFS=,
++ echo x86_64
+ for arch in '$(IFS=,; echo $repo_archs)'
++ run_reposync fedora-base-fc29 x86_64 jenkins/data/mirrors-reposync.conf --urls --quiet
++ local repo_name=fedora-base-fc29
++ local repo_arch=x86_64
++ local reposync_conf=jenkins/data/mirrors-reposync.conf
++ extra_args=("${@:4}")
++ local extra_args
++ reposync --config=jenkins/data/mirrors-reposync.conf --repoid=fedora-base-fc29 --arch=x86_64 --cachedir=/home/jenkins/mirrors_cache --download_path=/var/www/html/repos/yum/fedora-base-fc29/base --norepopath --newest-only --urls --quiet
Traceback (most recent call last):
File "/usr/bin/reposync", line 343, in <module>
main()
File "/usr/bin/reposync", line 175, in main
my.doRepoSetup()
File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in doRepoSetup
return self._getRepos(thisrepo, True)
File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in _getRepos
self._repos.doSetup(thisrepo)
File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
self.retrieveAllMD()
File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in retrieveAllMD
dl = repo._async and repo._commonLoadRepoXML(repo)
File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1465, in _commonLoadRepoXML
local = self.cachedir + '/repomd.xml'
File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 774, in <lambda>
cachedir = property(lambda self: self._dirGetAttr('cachedir'))
File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 757, in _dirGetAttr
self.dirSetup()
File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 735, in dirSetup
self._dirSetupMkdir_p(dir)
File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 712, in _dirSetupMkdir_p
raise Errors.RepoError, msg
yum.Errors.RepoError: Error making cache directory: /home/jenkins/mirrors_cache/centos-kvm-common-el7 error was: [Errno 17] File exists: '/home/jenkins/mirrors_cache/centos-kvm-common-el7'
+ reposync_out=
Build step 'Execute shell' marked build as failure
5 years, 10 months
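The RepoError above ([Errno 17] File exists on /home/jenkins/mirrors_cache/centos-kvm-common-el7) is the classic symptom of a non-atomic "check, then mkdir" racing against another job that shares the same cache directory. A race-tolerant version of the directory creation, shown here as a generic Python sketch rather than a patch to yum or mirror_mgr.sh, treats an already existing directory as success:

    import errno
    import os

    def mkdir_p(path):
        # Create `path` and its parents; tolerate the EEXIST race.
        try:
            os.makedirs(path)
        except OSError as exc:
            if exc.errno == errno.EEXIST and os.path.isdir(path):
                return  # another process created it first
            raise

On the mirrors host the same effect could presumably be had by giving each job its own cache directory, or by serializing reposync runs with the locking knobs already defined in mirror_mgr.sh (LOCK_BASE, MAX_LOCK_ATTEMPTS, LOCK_WAIT_INTERVAL).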
[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1380 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-41bfd.
5 years, 10 months
[oVirt Jenkins] ovirt-system-tests_compat-4.1-suite-master - Build
# 493 - Failure!
by jenkins@jenkins.phx.ovirt.org
Project: http://jenkins.ovirt.org/job/ovirt-system-tests_compat-4.1-suite-master/
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_compat-4.1-suite-master/493/
Build Number: 493
Build Status: Failure
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #493
[Sandro Bonazzola] repos: master: switch to gluster 5
-----------------
Failed Tests:
-----------------
1 test failed.
FAILED: 002_bootstrap.get_host_devices_41
Error Message:
Could not find block_vda1 device in host devices:
Stack Trace:
Traceback (most recent call last):
File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
test()
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
return func(get_test_prefix(), *args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper
prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
File "/home/jenkins/workspace/ovirt-system-tests_compat-4.1-suite-master/ovirt-system-tests/compat-4.1-suite-master/test-scenarios/002_bootstrap.py", line 1016, in get_host_devices
raise RuntimeError('Could not find block_vda1 device in host devices: {}'.format(device_list))
RuntimeError: Could not find block_vda1 device in host devices:
5 years, 10 months
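For reference, the failing assertion in 002_bootstrap.get_host_devices_41 boils down to a membership check on the device list returned for the host; the empty list in the error message suggests the lookup returned no devices at all. A reduced, stand-alone sketch of that check, where device_list stands in for whatever the suite actually fetches from the engine API:

    def assert_has_block_vda1(device_list):
        # Fail the same way the suite does if block_vda1 is missing.
        if 'block_vda1' not in device_list:
            raise RuntimeError(
                'Could not find block_vda1 device in host devices: {}'.format(
                    device_list))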