oVirt infra daily report - unstable production jobs - 182
by jenkins@jenkins.phx.ovirt.org
Good morning!
Attached is the HTML page with the Jenkins status report. You can also see it here:
- http://jenkins.ovirt.org/job/system_jenkins-report/182//artifact/exported...
Cheers,
Jenkins
[Attachment: upstream_report.html, rendered as plain text below]
RHEVM CI Jenkins Daily Report - 27/12/2016
00 Unstable Critical (http://jenkins.ovirt.org/)
All of the jobs below are automatically updated by Jenkins Job Builder; any
manual change will be lost in the next update. If you want to make permanent
changes, check out the jenkins repo:
http://gerrit.ovirt.org/gitweb?p=jenkins.git;a=tree;h=refs/heads/master;h...

- deploy-to-ovirt_experimental_4.1
  http://jenkins.ovirt.org/job/deploy-to-ovirt_experimental_4.1/
- ovirt-node-ng_ovirt-master-experimental_build-artifacts-el7-x86_64
  http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-master-experimental_buil...
- ovirt_3.6_he-system-tests
  http://jenkins.ovirt.org/job/ovirt_3.6_he-system-tests/
- ovirt_3.6_image-ng-system-tests
  http://jenkins.ovirt.org/job/ovirt_3.6_image-ng-system-tests/
- ovirt_master_system-tests_per_patch
  http://jenkins.ovirt.org/job/ovirt_master_system-tests_per_patch/
- system-backup_jenkins_old_ovirt_org
  http://jenkins.ovirt.org/job/system-backup_jenkins_old_ovirt_org/
  (Job disabled - the old Jenkins is probably not up any more)
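For anyone unfamiliar with the "managed by Jenkins Job Builder" note above: the job
configuration on the Jenkins master is regenerated from YAML definitions kept in the
jenkins repo, which is why manual edits in the web UI do not survive. A minimal sketch
of how such an update is typically applied is below; the clone URL, directory layout
and ini file name are illustrative assumptions, not the exact oVirt setup.

#!/bin/bash -e
# Sketch only: regenerate JJB-managed jobs from the jenkins repo.
# The clone URL, jobs/ directory and jenkins_jobs.ini name are assumptions;
# check the jenkins repo itself for the real layout and deployment job.
git clone https://gerrit.ovirt.org/jenkins.git
cd jenkins

# Render the job definitions locally first to catch YAML errors
# (this step does not need access to a Jenkins master).
jenkins-jobs test jobs/ -o /tmp/rendered-jobs/

# Push the rendered configuration to the Jenkins master; any change made
# manually through the web UI is overwritten at this point.
jenkins-jobs --conf jenkins_jobs.ini update jobs/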
Build failed in Jenkins: ovirt_master_system-tests #878
by jenkins@jenkins.phx.ovirt.org
See <http://jenkins.ovirt.org/job/ovirt_master_system-tests/878/changes>
Changes:
[Yaniv Kaul] Fixes and changes to storage tests
[Sandro Bonazzola] ovirt-hosted-engine-ha: added 4.1 branch
[Sandro Bonazzola] ovirt-release: move to 4.1 branch
[Sandro Bonazzola] cockpit-ovirt: exclude fc23 from 4.1
[Gil Shinar] Fix bugs in sdk yamls
------------------------------------------
[...truncated 762 lines...]
+ echo '----------- Cleaning with lago'
----------- Cleaning with lago
+ lago --workdir <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-te...> destroy --yes --all-prefixes
+ echo '----------- Cleaning with lago done'
----------- Cleaning with lago done
+ [[ 0 != \0 ]]
+ echo '======== Cleanup done'
======== Cleanup done
+ exit 0
+ exit
Took 52 seconds
===================================
##!
##! ERROR ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
##!########################################################
##########################################################
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :.* : True
Logical operation result is TRUE
Running script : #!/bin/bash -xe
echo 'shell_scripts/system_tests.collect_logs.sh'
#
# Required jjb vars:
# version
#
VERSION=master
SUITE_TYPE=
WORKSPACE="$PWD"
OVIRT_SUITE="${SUITE_TYPE}_suite_${VERSION}"
TESTS_LOGS="$WORKSPACE/ovirt-system-tests/exported-artifacts"
rm -rf "$WORKSPACE/exported-artifacts"
mkdir -p "$WORKSPACE/exported-artifacts"
if [[ -d "$TESTS_LOGS" ]]; then
mv "$TESTS_LOGS/"* "$WORKSPACE/exported-artifacts/"
fi
[ovirt_master_system-tests] $ /bin/bash -xe /tmp/hudson6460262332536282065.sh
+ echo shell_scripts/system_tests.collect_logs.sh
shell_scripts/system_tests.collect_logs.sh
+ VERSION=master
+ SUITE_TYPE=
+ WORKSPACE=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/>
+ OVIRT_SUITE=master
+ TESTS_LOGS=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-te...>
+ rm -rf <http://jenkins.ovirt.org/job/ovirt_master_system-tests/878/artifact/expor...>
+ mkdir -p <http://jenkins.ovirt.org/job/ovirt_master_system-tests/878/artifact/expor...>
+ [[ -d <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-te...> ]]
+ mv <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-te...> <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-te...> <http://jenkins.ovirt.org/job/ovirt_master_system-tests/878/artifact/expor...>
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
Match found for :.* : True
Logical operation result is TRUE
Running script : #!/bin/bash -x
echo "shell-scripts/mock_cleanup.sh"
# Make clear this is the cleanup, helps reading the jenkins logs
cat <<EOC
_______________________________________________________________________
#######################################################################
# #
# CLEANUP #
# #
#######################################################################
EOC
shopt -s nullglob
WORKSPACE="${WORKSPACE:-$PWD}"
UMOUNT_RETRIES="${UMOUNT_RETRIES:-3}"
UMOUNT_RETRY_DELAY="${UMOUNT_RETRY_DELAY:-1s}"
safe_umount() {
local mount="${1:?}"
local attempt
for ((attempt=0 ; attempt < $UMOUNT_RETRIES ; attempt++)); do
# If this is not the 1st time through the loop, sleep a while to let
# the problem "solve itself"
(( attempt > 0 )) && sleep "$UMOUNT_RETRY_DELAY"
# Try to umount
sudo umount --lazy "$mount" && return 0
# If the mount is already gone despite the failure, count it as success
findmnt --kernel --first "$mount" > /dev/null || return 0
done
echo "ERROR: Failed to umount $mount."
return 1
}
# restore the permissions in the working dir, as sometimes it leaves files
# owned by root and then the 'cleanup workspace' from jenkins job fails to
# clean and breaks the jobs
sudo chown -R "$USER" "$WORKSPACE"
# Archive the logs, we want them anyway
logs=(
./*log
./*/logs
)
if [[ "$logs" ]]; then
for log in "${logs[@]}"; do
[[ "$log" = ./exported-artifacts/* ]] && continue
echo "Copying ${log} to exported-artifacts"
mv "$log" exported-artifacts/
done
fi
# stop any processes running inside the chroot
failed=false
mock_confs=("$WORKSPACE"/*/mocker*)
# Clean current jobs mockroot if any
for mock_conf_file in "${mock_confs[@]}"; do
[[ "$mock_conf_file" ]] || continue
echo "Cleaning up mock $mock_conf"
mock_root="${mock_conf_file##*/}"
mock_root="${mock_root%.*}"
my_mock="/usr/bin/mock"
my_mock+=" --configdir=${mock_conf_file%/*}"
my_mock+=" --root=${mock_root}"
my_mock+=" --resultdir=$WORKSPACE"
#TODO: investigate why mock --clean fails to umount certain dirs sometimes,
#so we can use it instead of manually doing all this.
echo "Killing all mock orphan processes, if any."
$my_mock \
--orphanskill \
|| {
echo "ERROR: Failed to kill orphans on $chroot."
failed=true
}
mock_root="$(\
grep \
-Po "(?<=config_opts\['root'\] = ')[^']*" \
"$mock_conf_file" \
)" || :
[[ "$mock_root" ]] || continue
mounts=($(mount | awk '{print $3}' | grep "$mock_root")) || :
if [[ "$mounts" ]]; then
echo "Found mounted dirs inside the chroot $chroot. Trying to umount."
fi
for mount in "${mounts[@]}"; do
safe_umount "$mount" || failed=true
done
done
# Clean any leftover chroot from other jobs
for mock_root in /var/lib/mock/*; do
this_chroot_failed=false
mounts=($(cut -d' ' -f2 /proc/mounts | grep "$mock_root" | sort -r)) || :
if [[ "$mounts" ]]; then
echo "Found mounted dirs inside the chroot $mock_root." \
"Trying to umount."
fi
for mount in "${mounts[@]}"; do
safe_umount "$mount" && continue
# If we got here, we failed $UMOUNT_RETRIES attempts so we should make
# noise
failed=true
this_chroot_failed=true
done
if ! $this_chroot_failed; then
sudo rm -rf "$mock_root"
fi
done
# remove mock caches that are older than 2 days:
find /var/cache/mock/ -mindepth 1 -maxdepth 1 -type d -mtime +2 -print0 | \
xargs -0 -tr sudo rm -rf
# We make no effort to leave around caches that may still be in use because
# packages installed in them may go out of date, so may as well recreate them
# Drop all left over libvirt domains
for UUID in $(virsh list --all --uuid); do
virsh destroy $UUID || :
sleep 2
virsh undefine --remove-all-storage --storage vda --snapshots-metadata $UUID || :
done
if $failed; then
echo "Cleanup script failed, propegating failure to job"
exit 1
fi
[ovirt_master_system-tests] $ /bin/bash -x /tmp/hudson3370390969952432808.sh
+ echo shell-scripts/mock_cleanup.sh
shell-scripts/mock_cleanup.sh
+ cat
_______________________________________________________________________
#######################################################################
# #
# CLEANUP #
# #
#######################################################################
+ shopt -s nullglob
+ WORKSPACE=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/>
+ UMOUNT_RETRIES=3
+ UMOUNT_RETRY_DELAY=1s
+ sudo chown -R jenkins <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/>
+ logs=(./*log ./*/logs)
+ [[ -n ./ovirt-system-tests/logs ]]
+ for log in '"${logs[@]}"'
+ [[ ./ovirt-system-tests/logs = ./exported-artifacts/* ]]
+ echo 'Copying ./ovirt-system-tests/logs to exported-artifacts'
Copying ./ovirt-system-tests/logs to exported-artifacts
+ mv ./ovirt-system-tests/logs exported-artifacts/
+ failed=false
+ mock_confs=("$WORKSPACE"/*/mocker*)
+ for mock_conf_file in '"${mock_confs[@]}"'
+ [[ -n <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-te...> ]]
+ echo 'Cleaning up mock '
Cleaning up mock
+ mock_root=mocker-epel-7-x86_64.el7.cfg
+ mock_root=mocker-epel-7-x86_64.el7
+ my_mock=/usr/bin/mock
+ my_mock+=' --configdir=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-tests'>
+ my_mock+=' --root=mocker-epel-7-x86_64.el7'
+ my_mock+=' --resultdir=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/'>
+ echo 'Killing all mock orphan processes, if any.'
Killing all mock orphan processes, if any.
+ /usr/bin/mock --configdir=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-tests> --root=mocker-epel-7-x86_64.el7 --resultdir=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/> --orphanskill
WARNING: Could not find required logging config file: <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-te....> Using default...
INFO: mock.py version 1.2.21 starting (python version = 3.5.1)...
Start: init plugins
INFO: selinux enabled
Finish: init plugins
Start: run
Finish: run
++ grep -Po '(?<=config_opts\['\''root'\''\] = '\'')[^'\'']*' <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-te...>
+ mock_root=epel-7-x86_64-b8fbc97503d2f9478015768d3094ffe4
+ [[ -n epel-7-x86_64-b8fbc97503d2f9478015768d3094ffe4 ]]
+ mounts=($(mount | awk '{print $3}' | grep "$mock_root"))
++ mount
++ awk '{print $3}'
++ grep epel-7-x86_64-b8fbc97503d2f9478015768d3094ffe4
+ :
+ [[ -n '' ]]
+ find /var/cache/mock/ -mindepth 1 -maxdepth 1 -type d -mtime +2 -print0
+ xargs -0 -tr sudo rm -rf
++ virsh list --all --uuid
+ false
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 1
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
Archiving artifacts
Re: ost host addition failure
by Eyal Edri
Any updates?
The tests have been failing since Sunday because vdsmd won't start... master
repos haven't been refreshed for a few days due to this.
from host deploy log: [1]
basic-suite-master-engine/_var_log_ovirt-engine/host-deploy/ovirt-host-deploy-20161227012930-192.168.201.4-14af2bf0.log
the job links [2]
[1]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastComp...
2016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:921 execute-output: ('/bin/systemctl', 'start',
'vdsmd.service') stdout:
2016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:926 execute-output: ('/bin/systemctl', 'start',
'vdsmd.service') stderr:
A dependency job for vdsmd.service failed. See 'journalctl -xe' for details.
2016-12-27 01:29:29 DEBUG otopi.context context._executeMethod:142
method exception
Traceback (most recent call last):
File "/tmp/ovirt-QZ1ucxWFfm/pythonlib/otopi/context.py", line 132,
in _executeMethod
method['method']()
File "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/ovirt-host-deploy/vdsm/packages.py",
line 209, in _start
self.services.state('vdsmd', True)
File "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/otopi/services/systemd.py",
line 141, in state
service=name,
RuntimeError: Failed to start service 'vdsmd'
2016-12-27 01:29:29 ERROR otopi.context context._executeMethod:151
Failed to execute stage 'Closing up': Failed to start service 'vdsmd'
2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:760
ENVIRONMENT DUMP - BEGIN
2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770
ENV BASE/error=bool:'True'
2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770
ENV BASE/excep
[2] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastComp...
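For triage, "A dependency job for vdsmd.service failed" means one of the units vdsmd
requires failed before vdsmd itself was started, so the useful errors are usually in
that dependency's journal rather than vdsmd's own. A rough sketch of the usual first
checks on the affected host follows; the unit names beyond vdsmd.service are
assumptions based on a typical vdsm install, not taken from this log.

#!/bin/bash
# Rough triage sketch for "A dependency job for vdsmd.service failed".
# Run on the failing host; assumes a systemd-based host with vdsm installed.

# Which units does vdsmd pull in, and what state are they in right now?
systemctl list-dependencies vdsmd.service

# List every unit currently in failed state -- the real culprit is usually
# one of the dependencies rather than vdsmd itself.
systemctl --failed

# Journal entries for vdsmd and its usual companions around the failure.
# The supervdsmd/libvirtd unit names are assumptions for a typical install.
journalctl -xe -u vdsmd.service -u supervdsmd.service -u libvirtd.service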
On Sun, Dec 25, 2016 at 11:31 AM, Eyal Edri <eedri(a)redhat.com> wrote:
> We should see it fixed here hopefully [1]
>
>
> [1] http://jenkins.ovirt.org/view/All%20Running%20jobs/job/
> test-repo_ovirt_experimental_master/4412/console
>
> On Sun, Dec 25, 2016 at 11:19 AM, Dan Kenigsberg <danken(a)redhat.com>
> wrote:
>
>> On Sun, Dec 25, 2016 at 10:28 AM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
>> >
>> >
>> > On Sun, Dec 25, 2016 at 9:47 AM, Dan Kenigsberg <danken(a)redhat.com>
>> wrote:
>> >>
>> >> Correct. https://gerrit.ovirt.org/#/c/69052/
>> >>
>> >> Can you try adding
>> >> lago shell "$vm_name" -c "mkdir -p /var/log/ovirt-imageio-daemon/ &&
>> >> chown vdsm:kvm /var/log/ovirt-imageio-daemon/"
>> >
>> >
>> > How will it know what is the vdsm user before installing vdsm?
>>
>> You're right. A hack would have to `chmod a+rwx
>> /var/log/ovirt-imageio-daemon/` instead.
>>
>> > Why not either:
>> > 1. Fix it
>>
>> yes, that's why we've opened
>> https://bugzilla.redhat.com/show_bug.cgi?id=1400003 ; now a fix is
>> getting merged. I don't know when it is going to be ready in lago's
>> repos.
>>
>> > -or-
>> > 2. Revert the offending patch?
>>
>> I'm not aware of such a patch. It's a race that has been there forever,
>> and I don't know why it suddenly pops up so often.
>>
>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
--
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
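
For completeness, the workaround discussed above (pre-creating the ovirt-imageio-daemon
log directory before host deploy) would look roughly like the sketch below inside the
suite. Since the vdsm user does not exist yet at that point, it falls back to the open
permissions suggested in the thread. The prefix directory variable and the VM names are
illustrative assumptions, not the exact ovirt-system-tests layout.

#!/bin/bash -e
# Sketch of the workaround discussed above: pre-create the
# ovirt-imageio-daemon log directory on each host VM before host deploy.
# The vdsm user does not exist yet at this stage, hence chmod a+rwx instead
# of chown vdsm:kvm.  Prefix directory and VM names are assumptions.
cd "$SUITE_PREFIX_DIR"   # assumed: the lago prefix of the running suite
for vm_name in lago-basic-suite-master-host0 lago-basic-suite-master-host1; do
    lago shell "$vm_name" -c \
        "mkdir -p /var/log/ovirt-imageio-daemon && chmod a+rwx /var/log/ovirt-imageio-daemon"
done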