Build failed in Jenkins: system-sync_mirrors-epel-el6-x86_64 #1143
by jenkins@jenkins.phx.ovirt.org
See <http://jenkins.ovirt.org/job/system-sync_mirrors-epel-el6-x86_64/1143/dis...>
------------------------------------------
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on mirrors.phx.ovirt.org (mirrors) in workspace <http://jenkins.ovirt.org/job/system-sync_mirrors-epel-el6-x86_64/ws/>
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url http://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
> git rev-parse --verify HEAD # timeout=10
Resetting working tree
> git reset --hard # timeout=10
> git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from http://gerrit.ovirt.org/jenkins.git
> git --version # timeout=10
> git fetch --tags --progress http://gerrit.ovirt.org/jenkins.git +refs/heads/*:refs/remotes/origin/* --prune
> git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 304b66c2aca3fa56530cd337e97f5fe84ec44cf0 (origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 304b66c2aca3fa56530cd337e97f5fe84ec44cf0
Commit message: "nsis-simple-service-plugin:: move to 4.2 and latest fedora"
> git rev-list 304b66c2aca3fa56530cd337e97f5fe84ec44cf0 # timeout=10
[system-sync_mirrors-epel-el6-x86_64] $ /bin/bash -xe /tmp/jenkins8369699459956369535.sh
+ jenkins/scripts/mirror_mgr.sh resync_yum_mirror epel-el6 x86_64 jenkins/data/mirrors-reposync.conf
Checking if mirror needs a resync
Traceback (most recent call last):
  File "/usr/bin/reposync", line 343, in <module>
    main()
  File "/usr/bin/reposync", line 175, in main
    my.doRepoSetup()
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in doRepoSetup
    return self._getRepos(thisrepo, True)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in _getRepos
    self._repos.doSetup(thisrepo)
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
    self.retrieveAllMD()
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in retrieveAllMD
    dl = repo._async and repo._commonLoadRepoXML(repo)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1474, in _commonLoadRepoXML
    if self._latestRepoXML(local):
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1443, in _latestRepoXML
    oxml = self._saveOldRepoXML(local)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1300, in _saveOldRepoXML
    shutil.copy2(local, old_local)
  File "/usr/lib64/python2.7/shutil.py", line 131, in copy2
    copystat(src, dst)
  File "/usr/lib64/python2.7/shutil.py", line 98, in copystat
    os.utime(dst, (st.st_atime, st.st_mtime))
OSError: [Errno 2] No such file or directory: '/home/jenkins/mirrors_cache/fedora-updates-fc26/repomd.xml.old.tmp'
Build step 'Execute shell' marked build as failure
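For context, the OSError at the bottom of the traceback is raised by copystat(), which shutil.copy2() runs after it has already written the file contents. A minimal sketch (plain Python, not the reposync code) of that failure mode, assuming the destination file is removed, e.g. by a concurrent cleanup of the mirror cache, before the metadata copy runs:

```python
import errno
import os
import shutil
import tempfile

# shutil.copy2() = copyfile() + copystat(); copystat() ends by calling
# os.utime() on the destination. If another process removes the
# destination between the two steps, os.utime() fails with ENOENT,
# matching the "No such file or directory" in the traceback above.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "repomd.xml")
with open(src, "w") as f:
    f.write("<repomd/>")

dst = os.path.join(workdir, "repomd.xml.old.tmp")
# Simulate the race directly: dst does not exist when copystat() runs.
try:
    shutil.copystat(src, dst)
except OSError as e:
    assert e.errno == errno.ENOENT
    print("copystat failed: %s" % e)
```

If that is what happened here, the copy itself succeeded and only the metadata step hit the missing file, which points at two jobs touching the same cache directory at once rather than at a corrupt repo.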
[JIRA] (OVIRT-1863) Service Alert: foreman/HTTP tcp connection going up and down
by Evgheni Dereveanchin (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1863?page=com.atlassian.jir... ]
Evgheni Dereveanchin reassigned OVIRT-1863:
-------------------------------------------
Assignee: Evgheni Dereveanchin (was: infra)
> Service Alert: foreman/HTTP tcp connection going up and down
> ------------------------------------------------------------
>
> Key: OVIRT-1863
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1863
> Project: oVirt - virtualization made easy
> Issue Type: Task
> Reporter: Dafna Ron
> Assignee: Evgheni Dereveanchin
>
> Hi,
> we have been getting service disruption alerts for foreman HTTP.
> The service is going up and down.
> Can you please investigate this when you get a chance?
> ***** Icinga *****
> Notification Type: PROBLEM
> Service: HTTP tcp connection
> Host: foreman
> Address: 66.187.230.32
> State: CRITICAL
> Date/Time: Fri Jan 26 08:42:38 UTC 2018
> ***** Icinga *****
> Notification Type: RECOVERY
> Service: HTTP tcp connection
> Host: foreman
> Address: 66.187.230.32
> State: OK
> Date/Time: Fri Jan 26 14:45:38 UTC 2018
> Additional Info:
> HTTP OK: HTTP/1.1 200 OK - 4131 bytes in 1.563 second response time
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100077)
[ OST Failure Report ] [ oVirt Master (vcredist-x86) ] [ 26-01-2018 ] [ 002_bootstrap.verify_add_all_hosts ]
by Dafna Ron
Hi,
We had a failure in test 002_bootstrap.verify_add_all_hosts on oVirt Master.
I don't think the reported patch is related to the issue.
From what I can see, the host was locked when we tried to query its status,
and since we couldn't fence the host we exited with a timeout.
Link and headline of reported patches:
https://gerrit.ovirt.org/#/c/82234/1 - Initial import from ovirt-wgt-toolchain

Link to job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5123/

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5123/artifact/
(Relevant) error snippet from the log:

<error>
2018-01-26 06:37:31,022-05 INFO  [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-4) [75b3e642] Failed to Acquire Lock to object 'EngineLock:{exclusiveLocks='[52ee1a61-31c3-4af8-ae28-65f0953f1804=VDS, HOST_NETWORK52ee1a61-31c3-4af8-ae28-65f0953f1804=HOST_NETWORK]', sharedLocks=''}'
2018-01-26 06:37:31,022-05 WARN  [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-4) [75b3e642] Validation of action 'RefreshHostCapabilities' failed for user SYSTEM. Reasons: VAR__ACTION__REFRESH,VAR__TYPE__HOST_CAPABILITIES,ACTION_TYPE_FAILED_OBJECT_LOCKED
2018-01-26 06:37:31,373-05 DEBUG [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (SSL Stomp Reactor) [1d883766] MESSAGE content-length:80 destination:jms.topic.vdsm_responses content-type:application/json subscription:ce434ca4-0c91-44b6-843a-7f8cff40d8e4 {"jsonrpc": "2.0", "id": "8f1c8652-044c-4051-85d7-3d290ee2014f", "result": true}^@
2018-01-26 06:37:31,374-05 DEBUG [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] Message received: {"jsonrpc": "2.0", "id": "8f1c8652-044c-4051-85d7-3d290ee2014f", "result": true}
2018-01-26 06:37:31,374-05 ERROR [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) [] Not able to update response for "8f1c8652-044c-4051-85d7-3d290ee2014f"
2018-01-26 06:37:31,376-05 DEBUG [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (SSL Stomp Reactor) [1d883766] MESSAGE content-length:80
2018-01-26 06:37:35,147-05 ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator] (EE-ManagedThreadFactory-engineScheduled-Thread-8) [78ab5d20] Can not run fence action on host 'lago-basic-suite-master-host-0', no suitable proxy host was found.
</error>
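The lock failure in the snippet follows a common pattern: a command tries to take an exclusive per-host lock, finds it held by another flow, and retries until a deadline before giving up with a timeout. An illustrative sketch of that pattern in Python (not the oVirt engine's actual Java implementation; all names here are hypothetical):

```python
import threading
import time

# Hypothetical exclusive per-host lock, standing in for the engine's
# EngineLock on the VDS object in the log above.
host_lock = threading.Lock()

def run_with_host_lock(action, timeout=5.0, interval=0.5):
    """Run action under the host lock, retrying until the deadline."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if host_lock.acquire(blocking=False):
            try:
                return action()
            finally:
                host_lock.release()
        time.sleep(interval)  # lock held by another command; retry
    # This branch corresponds to the reported behavior: the lock never
    # freed up, so the command failed with a timeout.
    raise TimeoutError("could not acquire exclusive host lock")

print(run_with_host_lock(lambda: "host status: up"))
```

Under this reading, the failure is a scheduling collision (RefreshHostCapabilities vs. fencing both wanting the host exclusively) rather than anything introduced by the reported patch.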
[JIRA] (OVIRT-1863) Service Alert: foreman/HTTP tcp connection going up and down
by Dafna Ron (oVirt JIRA)
Dafna Ron created OVIRT-1863:
--------------------------------
Summary: Service Alert: foreman/HTTP tcp connection going up and down
Key: OVIRT-1863
URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1863
Project: oVirt - virtualization made easy
Issue Type: Task
Reporter: Dafna Ron
Assignee: infra
Hi,
we have been getting service disruption alerts for foreman HTTP.
The service is going up and down.
Can you please investigate this when you get a chance?
***** Icinga *****
Notification Type: PROBLEM
Service: HTTP tcp connection
Host: foreman
Address: 66.187.230.32
State: CRITICAL
Date/Time: Fri Jan 26 08:42:38 UTC 2018
***** Icinga *****
Notification Type: RECOVERY
Service: HTTP tcp connection
Host: foreman
Address: 66.187.230.32
State: OK
Date/Time: Fri Jan 26 14:45:38 UTC 2018
Additional Info:
HTTP OK: HTTP/1.1 200 OK - 4131 bytes in 1.563 second response time