[JIRA] (OVIRT-2660) Please cleanup oVirt Image Glance Repository
by sbonazzo (oVirt JIRA)
sbonazzo created OVIRT-2660:
-------------------------------
Summary: Please cleanup oVirt Image Glance Repository
Key: OVIRT-2660
URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2660
Project: oVirt - virtualization made easy
Issue Type: By-EMAIL
Reporter: sbonazzo
Assignee: infra
The oVirt Image Glance repository keeps a lot of disk images from distributions that
have reached end of life or have been obsoleted by newer releases (CentOS, Fedora,
Ubuntu and CirrOS).
Can you please drop the images of unsupported distributions from the repository?
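For reference, a minimal sketch of what the cleanup could look like on the Glance
side, assuming python-openstackclient is installed and credentialed against the
image service; the image name used below is purely hypothetical:

    # List everything currently published in the repository
    openstack image list

    # Delete an image belonging to an EOL release (name is hypothetical)
    openstack image delete 'CirrOS 0.3.4 for x86_64'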
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100098)
5 years, 10 months
Build failed in Jenkins:
system-sync_mirrors-fedora-updates-fc29-x86_64 #222
by jenkins@jenkins.phx.ovirt.org
See <http://jenkins.ovirt.org/job/system-sync_mirrors-fedora-updates-fc29-x86_...>
------------------------------------------
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on mirrors.phx.ovirt.org (mirrors) in workspace <http://jenkins.ovirt.org/job/system-sync_mirrors-fedora-updates-fc29-x86_...>
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url http://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
> git rev-parse --verify HEAD # timeout=10
Resetting working tree
> git reset --hard # timeout=10
> git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from http://gerrit.ovirt.org/jenkins.git
> git --version # timeout=10
> git fetch --tags --progress http://gerrit.ovirt.org/jenkins.git +refs/heads/*:refs/remotes/origin/* --prune
> git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 7d6c5113f814af2234e0d05d1e01f52ffa7258c2 (origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 7d6c5113f814af2234e0d05d1e01f52ffa7258c2
Commit message: "Containerized loader node"
> git rev-list --no-walk 7d6c5113f814af2234e0d05d1e01f52ffa7258c2 # timeout=10
[system-sync_mirrors-fedora-updates-fc29-x86_64] $ /bin/bash -xe /tmp/jenkins2542019165601063126.sh
+ jenkins/scripts/mirror_mgr.sh resync_yum_mirror fedora-updates-fc29 x86_64 jenkins/data/mirrors-reposync.conf
+ MIRRORS_MP_BASE=/var/www/html/repos
+ MIRRORS_HTTP_BASE=http://mirrors.phx.ovirt.org/repos
+ MIRRORS_CACHE=/home/jenkins/mirrors_cache
+ MAX_LOCK_ATTEMPTS=120
+ LOCK_WAIT_INTERVAL=5
+ LOCK_BASE=/home/jenkins
+ OLD_MD_TO_KEEP=100
+ HTTP_SELINUX_TYPE=httpd_sys_content_t
+ HTTP_FILE_MODE=644
+ main resync_yum_mirror fedora-updates-fc29 x86_64 jenkins/data/mirrors-reposync.conf
+ local command=resync_yum_mirror
+ command_args=("${@:2}")
+ local command_args
+ cmd_resync_yum_mirror fedora-updates-fc29 x86_64 jenkins/data/mirrors-reposync.conf
+ local repo_name=fedora-updates-fc29
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local sync_needed
+ mkdir -p /home/jenkins/mirrors_cache
+ verify_repo_fs fedora-updates-fc29 yum
+ local repo_name=fedora-updates-fc29
+ local repo_type=yum
+ sudo install -o jenkins -d /var/www/html/repos/yum /var/www/html/repos/yum/fedora-updates-fc29 /var/www/html/repos/yum/fedora-updates-fc29/base
+ check_yum_sync_needed fedora-updates-fc29 x86_64 jenkins/data/mirrors-reposync.conf sync_needed
+ local repo_name=fedora-updates-fc29
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local p_sync_needed=sync_needed
+ local reposync_out
+ echo 'Checking if mirror needs a resync'
Checking if mirror needs a resync
+ rm -rf /home/jenkins/mirrors_cache/fedora-updates-fc29
++ IFS=,
++ echo x86_64
+ for arch in '$(IFS=,; echo $repo_archs)'
++ run_reposync fedora-updates-fc29 x86_64 jenkins/data/mirrors-reposync.conf --urls --quiet
++ local repo_name=fedora-updates-fc29
++ local repo_arch=x86_64
++ local reposync_conf=jenkins/data/mirrors-reposync.conf
++ extra_args=("${@:4}")
++ local extra_args
++ reposync --config=jenkins/data/mirrors-reposync.conf --repoid=fedora-updates-fc29 --arch=x86_64 --cachedir=/home/jenkins/mirrors_cache --download_path=/var/www/html/repos/yum/fedora-updates-fc29/base --norepopath --newest-only --urls --quiet
Traceback (most recent call last):
  File "/usr/bin/reposync", line 343, in <module>
    main()
  File "/usr/bin/reposync", line 175, in main
    my.doRepoSetup()
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in doRepoSetup
    return self._getRepos(thisrepo, True)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in _getRepos
    self._repos.doSetup(thisrepo)
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
    self.retrieveAllMD()
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in retrieveAllMD
    dl = repo._async and repo._commonLoadRepoXML(repo)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1479, in _commonLoadRepoXML
    result = self._getFileRepoXML(local, text)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1256, in _getFileRepoXML
    size=102400) # setting max size as 100K
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1022, in _getFile
    result = self.grab.urlgrab(misc.to_utf8(relative), local,
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 700, in <lambda>
    grab = property(lambda self: self._getgrab())
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 695, in _getgrab
    self._setupGrab()
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 632, in _setupGrab
    urls = self.urls
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 878, in <lambda>
    urls = property(fget=lambda self: self._geturls(),
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 875, in _geturls
    self._baseurlSetup()
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 821, in _baseurlSetup
    mirrorurls.extend(list(self.metalink_data.urls()))
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 918, in <lambda>
    metalink_data = property(fget=lambda self: self._getMetalink(),
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 897, in _getMetalink
    raise Errors.RepoError, msg
yum.Errors.RepoError: Cannot retrieve metalink for repository: fedora-updates-fc29. Please verify its path and try again
+ reposync_out=
Build step 'Execute shell' marked build as failure
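A quick manual check of the failing metadata fetch can help tell a dead or EOL repo
apart from a transient mirror problem. The sketch below is an assumption: the real
metalink URL is whatever the fedora-updates-fc29 section of
jenkins/data/mirrors-reposync.conf points at (the URL shown only follows the usual
Fedora mirror-manager pattern), while the reposync invocation is copied from the
failing build:

    # Hypothetical metalink check (substitute the URL from mirrors-reposync.conf)
    curl -fsS 'https://mirrors.fedoraproject.org/metalink?repo=updates-released-f29&arch=x86_64' | head

    # Reproduce the failure outside Jenkins with the same reposync call as the job
    reposync --config=jenkins/data/mirrors-reposync.conf \
        --repoid=fedora-updates-fc29 --arch=x86_64 \
        --cachedir=/home/jenkins/mirrors_cache --urls --quiet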
5 years, 10 months
[JIRA] (OVIRT-2659) Urgent: ChangeQueue is broken
by sbonazzo (oVirt JIRA)
sbonazzo created OVIRT-2659:
-------------------------------
Summary: Urgent: ChangeQueue is broken
Key: OVIRT-2659
URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2659
Project: oVirt - virtualization made easy
Issue Type: By-EMAIL
Reporter: sbonazzo
Assignee: infra
Jobs are not being added to the change queue due to a bug:
https://jenkins.ovirt.org/job/standard-enqueue/18870/
*00:03:06.321* Will enqueue to: ovirt-master
[Pipeline] parallel
[Pipeline] { (Branch: ovirt-master)
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
*00:03:07.538* Traceback (most recent call last):
*00:03:07.539*   File "/home/jenkins/workspace/standard-enqueue@tmp/durable-e69f565e/script.sh", line 2, in <module>
*00:03:07.539*     from scripts.change_queue import JenkinsChangeQueueClient
*00:03:07.539*   File "/home/jenkins/workspace/standard-enqueue/jenkins/scripts/change_queue/__init__.py", line 14, in <module>
*00:03:07.539*     from .changes import DisplayableChangeWrapper, ChangeInStreamWrapper, \
*00:03:07.539*   File "/home/jenkins/workspace/standard-enqueue/jenkins/scripts/change_queue/changes/__init__.py", line 15, in <module>
*00:03:07.539*     from scripts.gerrit import GerritPatchset
*00:03:07.539*   File "/home/jenkins/workspace/standard-enqueue/jenkins/scripts/gerrit.py", line 10, in <module>
*00:03:07.539*     from paramiko.client import SSHClient
*00:03:07.539* ImportError: No module named paramiko.client
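The fix is to make paramiko importable by the interpreter the job uses. A minimal
sketch, assuming the job runs under the slave's system Python 2 on an RPM-based
node (both assumptions):

    # Verify the module really is missing for the interpreter the job uses
    python -c 'from paramiko.client import SSHClient' || echo 'paramiko not importable'

    # Install it from the distro package, or via pip if that matches the slave setup
    sudo yum install -y python-paramiko   # package name varies by distro/release
    # pip install --user paramiko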
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100098)
5 years, 10 months
[JENKINS] Failed to set up project
kubevirt_kubevirt_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#3298 kubevirt [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-nqwhb.
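For the record, a rough sketch of the manual check this notification asks the infra
owner to perform; the docker data-root path and the prune step are assumptions, not
what docker_cleanup.py itself runs:

    # Check free space where docker keeps its data (default data-root assumed)
    df -h /var/lib/docker

    # See what docker is consuming, then reclaim unused images/containers/volumes
    docker system df
    docker system prune -af --volumes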
5 years, 10 months
[JENKINS] Failed to set up project
kubevirt_kubevirt_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#3295 kubevirt [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-ph35p.
5 years, 10 months
[JENKINS] Failed to set up project
kubevirt_kubevirt_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#3295 kubevirt [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-5btxl.
5 years, 10 months
[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1139 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-z94hk.
5 years, 10 months