RHEVM CI Jenkins daily report - 20
by jenkins@jenkins.phx.ovirt.org
Good morning!
Attached is the HTML page with the Jenkins status report. You can also see it here:
- http://jenkins.ovirt.org/job/system_jenkins-report/20//artifact/exported-...
Cheers,
Jenkins
[Attachment: upstream_report.html]
<!DOCTYPE html><html><head><style type="text/css">
table.gridtable {
border-collapse: collapse;
table-layout:fixed;
width:1600px;
font-family: monospace;
font-size:13px;
}
.head {
font-size:20px;
font-family: arial;
}
.sub {
font-size:18px;
background-color:#e5e5e5;
font-family: arial;
}
pre {
font-family: monospace;
display: inline;
white-space: pre-wrap;
white-space: -moz-pre-wrap !important;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;
}
</style>
</head>
<body>
<table class="gridtable" border=2>
<tr><th colspan=2 class=head>
RHEVM CI Jenkins Daily Report - 20/07/2016
</th></tr><tr><th colspan=2 class=sub>
<font color="blue"><a href="http://jenkins.ovirt.org/">00 Unstable Jobs (Production)</a></font>
</th></tr>
<tr><td>
<a href="http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-3.6_build-artifacts-fc22...">ovirt-node-ng_ovirt-3.6_build-artifacts-fc22-x86_64</a>
</td><td>
This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the <a href="http://gerrit.ovirt.org/gitweb?p=jenkins.git;a=tree;h=refs/heads/master;h...">
jenkins</a> repo.
<!-- Managed by Jenkins Job Builder -->
</td></tr>
<tr><td>
<a href="http://jenkins.ovirt.org/job/ovirt-node-plugin-hosted-engine_ovirt-3.6_ch...">ovirt-node-plugin-hosted-engine_ovirt-3.6_check-merged-el7-x86_64</a>
</td><td>
This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the <a href="http://gerrit.ovirt.org/gitweb?p=jenkins.git;a=tree;h=refs/heads/master;h...">
jenkins</a> repo.
<!-- Managed by Jenkins Job Builder -->
</td></tr>
<tr><td>
<a href="http://jenkins.ovirt.org/job/vdsm_3.6_check-merged-fc23-x86_64/">vdsm_3.6_check-merged-fc23-x86_64</a>
</td><td>
This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the <a href="http://gerrit.ovirt.org/gitweb?p=jenkins.git;a=tree;h=refs/heads/master;h...">
jenkins</a> repo.
<!-- Managed by Jenkins Job Builder -->
</td></tr>
<tr><td>
<a href="http://jenkins.ovirt.org/job/vdsm_4.0_check-merged-el7-x86_64/">vdsm_4.0_check-merged-el7-x86_64</a>
</td><td>
This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the <a href="http://gerrit.ovirt.org/gitweb?p=jenkins.git;a=tree;h=refs/heads/master;h...">
jenkins</a> repo.
<!-- Managed by Jenkins Job Builder -->
</td></tr>
<tr><td>
<a href="http://jenkins.ovirt.org/job/vdsm_4.0_check-merged-fc23-x86_64/">vdsm_4.0_check-merged-fc23-x86_64</a>
</td><td>
This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the <a href="http://gerrit.ovirt.org/gitweb?p=jenkins.git;a=tree;h=refs/heads/master;h...">
jenkins</a> repo.
<!-- Managed by Jenkins Job Builder -->
</td></tr>
<tr><td>
<a href="http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/">vdsm_master_check-merged-el7-x86_64</a>
</td><td>
This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the <a href="http://gerrit.ovirt.org/gitweb?p=jenkins.git;a=tree;h=refs/heads/master;h...">
jenkins</a> repo.
<!-- Managed by Jenkins Job Builder -->
</td></tr>
<tr><td>
<a href="http://jenkins.ovirt.org/job/vdsm_master_check-merged-fc24-x86_64/">vdsm_master_check-merged-fc24-x86_64</a>
</td><td>
This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the <a href="http://gerrit.ovirt.org/gitweb?p=jenkins.git;a=tree;h=refs/heads/master;h...">
jenkins</a> repo.
<!-- Managed by Jenkins Job Builder -->
</td></tr>
</table>
</body></html>
Let's Consider Having Three Separated Environments in Infra
by Anton Marchukov
Hello All.
This is a follow-up to the meeting minutes, just to record my thoughts for
consideration during the actual design.
To put it plainly: I think we need to target the creation of 3 (yes, three)
independent and completely similar setups, with as few shared parts as
possible.
If we choose to provide reliability at the service level, then we do need 3,
because:
1. If we mess up one environment (e.g. its storage dies completely), we
still have 2 working ones, which still gives us reliability, since one of
them can fail. So it moves us out of crunch mode back into the regular
work mode.
2. Consensus-based algorithms generally require at least 2N+1 instances to
tolerate N failures, unless they use some special mode. The smallest case
is N=1, i.e. 3 instances, and it would make sense to distribute them
across the three environments.
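The 2N+1 sizing above can be sketched in a few lines of shell (plain arithmetic only; this assumes nothing about any particular consensus implementation):

```shell
# Sketch of the 2N+1 rule: to tolerate N failed members, a
# majority-quorum consensus cluster needs 2N+1 members.
for n in 1 2 3; do
  size=$((2 * n + 1))       # total members
  quorum=$((size / 2 + 1))  # majority needed to keep making progress
  echo "tolerate $n failure(s): $size members, quorum of $quorum"
done
```

For N=1 this gives 3 members with a quorum of 2, which is exactly the three-environment case.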
I know the concern with having even 2 envs was that we would spend more
effort maintaining them. But I think the opposite is true. Having 3 is
actually less effort to maintain, if we make them similar, because:
1. We can do gradual canary updates, the same as with failures: test the
update on 1 environment while the other 2 keep running, which still
provides reliability. So an upgrade is safe and no longer
time-constrained.
2. If the environments are similar, then once we establish the correct
playbook for one, we can just apply it to the second and later to the
third. So the overhead is not in fact tripled, and if automated it is no
additional effort at all.
3. We are more free to test and experiment with one environment. We can
even destroy it and recreate it from scratch, etc. Indirectly this will
reduce our effort.
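Points 1 and 2 combine naturally into a rollout loop. A minimal sketch, where the environment names and the `site.yml` playbook name are hypothetical (the real repo layout may differ):

```shell
# Canary-style rollout sketch: apply the same (hypothetical) playbook
# to three similar environments one at a time, aborting on the first
# failure so the remaining environments keep running untouched.
for env in env1 env2 env3; do
  echo "applying site.yml to $env"   # site.yml is an assumed name
  # ansible-playbook -i "inventories/$env" site.yml || exit 1
done
```

Because the loop stops at the first failure, at most one environment is ever in an unknown state, and the other two still provide service-level reliability.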
I think the only real problem is the initial step, where we have to design
an ideal hardware and network layout for this. But once that is done, it
will be easier to go with 3 environments. It may also be possible to plan
it so that we start with just one and later convert it into three.
Anton.
--
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat
Building for fc23 fails on yum package install
by Vojtech Szocs
Hi,
we're trying to build Dashboard 1.0.0-1, and the build fails for fc23:
16:26:03 INFO: installing package(s): ovirt-engine-nodejs ovirt-engine-nodejs-modules
16:26:03 ERROR: Command failed. See logs for output.
16:26:03 # /usr/bin/yum-deprecated --installroot /var/lib/mock/fedora-23-x86_64-84590ba0ae0d50bf8fb4605dac9e1a22-7835/root/ --releasever 23 install ovirt-engine-nodejs ovirt-engine-nodejs-modules --setopt=tsflags=nocontexts
16:26:03 Install packages took 2 seconds
http://jenkins.ovirt.org/job/ovirt-engine-dashboard_4.0_check-patch-fc23-...
Is this an infra issue? I'll try to retrigger the build in the meantime.
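One quick way to narrow down whether it is an infra issue is to grep the mock/yum output for repository errors. A rough sketch, where the sample log text and the verdict strings are my own assumptions, not the actual job output:

```shell
# Hypothetical triage of a failed mock package install. The $log text
# below is a stand-in; in practice it would come from the job's
# root.log / stdout_stderr.log artifacts.
log='ERROR: Command failed. See logs for output.
No package ovirt-engine-nodejs available.'
if echo "$log" | grep -qi 'no package .* available'; then
  verdict="package not found in the configured repos (likely infra/repo sync)"
else
  verdict="packages resolved; inspect root.log for the real error"
fi
echo "$verdict"
```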
Thanks,
Vojtech
Build failed in Jenkins: ovirt_4.0_he-system-tests #37
by jenkins@jenkins.phx.ovirt.org
See <http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/37/changes>
Changes:
[Lev Veyde] ovirt-system-tests: Add automation for he_iscsi_basic_suite_4.0
------------------------------------------
[...truncated 681 lines...]
|-
|-
|- --== MISC CONFIGURATION ==--
|-
|- Please choose Data Warehouse sampling scale:
|- (1) Basic
|- (2) Full
|- (1, 2)[1]:
|-
|- --== END OF CONFIGURATION ==--
|-
|- [ INFO ] Stage: Setup validation
|- [WARNING] Cannot validate host name settings, reason: cannot resolve own name 'hosted-engine'
|- [WARNING] Warning: Not enough memory is available on the host. Minimum requirement is 4096MB, and 16384MB is recommended.
|-
|- --== CONFIGURATION PREVIEW ==--
|-
|- Application mode : both
|- Default SAN wipe after delete : False
|- Firewall manager : firewalld
|- Update Firewall : True
|- Host FQDN : hosted-engine.lago.local
|- Engine database secured connection : False
|- Engine database host : localhost
|- Engine database user name : engine
|- Engine database name : engine
|- Engine database port : 5432
|- Engine database host name validation : False
|- DWH database secured connection : False
|- DWH database host : localhost
|- DWH database user name : ovirt_engine_history
|- DWH database name : ovirt_engine_history
|- DWH database port : 5432
|- DWH database host name validation : False
|- Engine installation : True
|- PKI organization : lago.local
|- Configure local Engine database : True
|- Set application as default page : True
|- Configure Apache SSL : True
|- DWH installation : True
|- Configure local DWH database : True
|- Engine Host FQDN : hosted-engine.lago.local
|- Configure VMConsole Proxy : True
|- Configure WebSocket Proxy : True
|- [ INFO ] Stage: Transaction setup
|- [ INFO ] Stopping engine service
|- [ INFO ] Stopping ovirt-fence-kdump-listener service
|- [ INFO ] Stopping dwh service
|- [ INFO ] Stopping websocket-proxy service
|- [ INFO ] Stage: Misc configuration
|- [ INFO ] Stage: Package installation
|- [ INFO ] Stage: Misc configuration
|- [ INFO ] Upgrading CA
|- [ INFO ] Initializing PostgreSQL
|- [ INFO ] Creating PostgreSQL 'engine' database
|- [ INFO ] Configuring PostgreSQL
|- [ INFO ] Creating PostgreSQL 'ovirt_engine_history' database
|- [ INFO ] Configuring PostgreSQL
|- [ INFO ] Creating CA
|- [ INFO ] Creating/refreshing Engine database schema
Build timed out (after 360 minutes). Marking the build as failed.
Build was aborted
Performing Post build task...
Match found for :.* : True
Logical operation result is TRUE
Running script : #!/bin/bash -xe
echo 'shell_scripts/system_tests.collect_logs.sh'
#
# Required jjb vars:
# version
#
VERSION=4.0
SUITE_TYPE=
WORKSPACE="$PWD"
OVIRT_SUITE="$SUITE_TYPE_suite_$VERSION"
TESTS_LOGS="$WORKSPACE/ovirt-system-tests/exported-artifacts"
rm -rf "$WORKSPACE/exported-artifacts"
mkdir -p "$WORKSPACE/exported-artifacts"
if [[ -d "$TESTS_LOGS" ]]; then
mv "$TESTS_LOGS/"* "$WORKSPACE/exported-artifacts/"
fi
[ovirt_4.0_he-system-tests] $ /bin/bash -xe /tmp/hudson301786022997325798.sh
+ echo shell_scripts/system_tests.collect_logs.sh
shell_scripts/system_tests.collect_logs.sh
+ VERSION=4.0
+ SUITE_TYPE=
+ WORKSPACE=<http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/>
+ OVIRT_SUITE=4.0
+ TESTS_LOGS=<http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/ovirt-system-te...>
+ rm -rf <http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/37/artifact/export...>
+ mkdir -p <http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/37/artifact/export...>
+ [[ -d <http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/ovirt-system-te...> ]]
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
Match found for :.* : True
Logical operation result is TRUE
Running script : #!/bin/bash -xe
echo "shell-scripts/mock_cleanup.sh"
shopt -s nullglob
WORKSPACE="$PWD"
# Make clear this is the cleanup, helps reading the jenkins logs
cat <<EOC
_______________________________________________________________________
#######################################################################
# #
# CLEANUP #
# #
#######################################################################
EOC
# Archive the logs, we want them anyway
logs=(
./*log
./*/logs
)
if [[ "$logs" ]]; then
tar cvzf exported-artifacts/logs.tgz "${logs[@]}"
rm -rf "${logs[@]}"
fi
# stop any processes running inside the chroot
failed=false
mock_confs=("$WORKSPACE"/*/mocker*)
# Clean current jobs mockroot if any
for mock_conf_file in "${mock_confs[@]}"; do
[[ "$mock_conf_file" ]] || continue
echo "Cleaning up mock $mock_conf"
mock_root="${mock_conf_file##*/}"
mock_root="${mock_root%.*}"
my_mock="/usr/bin/mock"
my_mock+=" --configdir=${mock_conf_file%/*}"
my_mock+=" --root=${mock_root}"
my_mock+=" --resultdir=$WORKSPACE"
#TODO: investigate why mock --clean fails to umount certain dirs sometimes,
#so we can use it instead of manually doing all this.
echo "Killing all mock orphan processes, if any."
$my_mock \
--orphanskill \
|| {
echo "ERROR: Failed to kill orphans on $chroot."
failed=true
}
mock_root="$(\
grep \
-Po "(?<=config_opts\['root'\] = ')[^']*" \
"$mock_conf_file" \
)" || :
[[ "$mock_root" ]] || continue
mounts=($(mount | awk '{print $3}' | grep "$mock_root")) || :
if [[ "$mounts" ]]; then
echo "Found mounted dirs inside the chroot $chroot. Trying to umount."
fi
for mount in "${mounts[@]}"; do
sudo umount --lazy "$mount" \
|| {
echo "ERROR: Failed to umount $mount."
failed=true
}
done
done
# Clean any leftover chroot from other jobs
for mock_root in /var/lib/mock/*; do
this_chroot_failed=false
mounts=($(mount | awk '{print $3}' | grep "$mock_root")) || :
if [[ "$mounts" ]]; then
echo "Found mounted dirs inside the chroot $mock_root." \
"Trying to umount."
fi
for mount in "${mounts[@]}"; do
sudo umount --lazy "$mount" \
|| {
echo "ERROR: Failed to umount $mount."
failed=true
this_chroot_failed=true
}
done
if ! $this_chroot_failed; then
sudo rm -rf "$mock_root"
fi
done
if $failed; then
echo "Aborting."
exit 1
fi
# remove mock system cache, we will setup proxies to do the caching and this
# takes lots of space between runs
shopt -u nullglob
sudo rm -Rf /var/cache/mock/*
# restore the permissions in the working dir, as sometimes it leaves files
# owned by root and then the 'cleanup workspace' from jenkins job fails to
# clean and breaks the jobs
sudo chown -R "$USER" "$WORKSPACE"
[ovirt_4.0_he-system-tests] $ /bin/bash -xe /tmp/hudson1800591941197086904.sh
+ echo shell-scripts/mock_cleanup.sh
shell-scripts/mock_cleanup.sh
+ shopt -s nullglob
+ WORKSPACE=<http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/>
+ cat
_______________________________________________________________________
#######################################################################
# #
# CLEANUP #
# #
#######################################################################
+ logs=(./*log ./*/logs)
+ [[ -n ./ovirt-system-tests/logs ]]
+ tar cvzf exported-artifacts/logs.tgz ./ovirt-system-tests/logs
/tmp/hudson1689124028786776252.sh: line 24: 97484 Terminated ../jenkins/mock_configs/mock_runner.sh --mock-confs-dir ../jenkins/mock_configs --try-proxy --execute-script "automation/$OVIRT_SUITE.sh" fc23
./ovirt-system-tests/logs/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.he_basic_suite_4.0.sh/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.he_basic_suite_4.0.sh/he_basic_suite_4.0.sh.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.clean_rpmdb/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.clean_rpmdb/stdout_stderr.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/root.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/build.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/state.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/stdout_stderr.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/root.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/build.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/state.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/stdout_stderr.log
gzip: stdout: No space left on device
tar: exported-artifacts/logs.tgz: Wrote only 2048 of 10240 bytes
tar: Error is not recoverable: exiting now
POST BUILD TASK : FAILURE
END OF POST BUILD TASK : 1
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Archiving artifacts