Tests failed because global_setup.sh failed
by Nir Soffer
Not sure why global setup failed - every check in this trace returns 0, and then the script aborts:
+ [[ ! -O /home/jenkins/.ssh ]]
+ [[ ! -G /home/jenkins/.ssh ]]
+ verify_set_permissions 700 /home/jenkins/.ssh
+ local target_permissions=700
+ local path_to_set=/home/jenkins/.ssh
++ stat -c %a /home/jenkins/.ssh
+ local access=700
+ [[ 700 != \7\0\0 ]]
+ return 0
+ [[ -f /home/jenkins/.ssh/known_hosts ]]
+ verify_set_ownership /home/jenkins/.ssh/known_hosts
+ local path_to_set=/home/jenkins/.ssh/known_hosts
++ id -un
+ local owner=jenkins
++ id -gn
+ local group=jenkins
+ [[ ! -O /home/jenkins/.ssh/known_hosts ]]
+ [[ ! -G /home/jenkins/.ssh/known_hosts ]]
+ verify_set_permissions 644 /home/jenkins/.ssh/known_hosts
+ local target_permissions=644
+ local path_to_set=/home/jenkins/.ssh/known_hosts
++ stat -c %a /home/jenkins/.ssh/known_hosts
+ local access=644
+ [[ 644 != \6\4\4 ]]
+ return 0
+ return 0
+ true
+ log ERROR Aborting.
Build:
https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_...
Re: [VDSM] Tests fail with "missing workspace /home/jenkins/workspace/vdsm_standard-check-patch/vdsm on vm0038.workers-phx.ovirt.org"
by Nir Soffer
On Fri, Dec 21, 2018 at 1:47 AM Nir Soffer <nsoffer(a)redhat.com> wrote:
> Third time today:
>
> Error:
> sh: /home/jenkins/workspace/vdsm_standard-check-patch/vdsm@tmp/durable-15072b7e/jenkins-result.txt.tmp: No such file or directory
> mv: cannot stat '/home/jenkins/workspace/vdsm_standard-check-patch/vdsm@tmp/durable-15072b7e/jenkins-result.txt.tmp': No such file or directory
> missing workspace /home/jenkins/workspace/vdsm_standard-check-patch/vdsm on vm0013.workers-phx.ovirt.org
>
> Build:
>
> https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_...
Adding devel - if you see this error, please report it here.
On Thu, Dec 20, 2018 at 10:01 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
> >
> > Two incidents on different slaves today:
> >
> > Error:
> > sh: /home/jenkins/workspace/vdsm_standard-check-patch/vdsm@tmp/durable-d993b7a2/jenkins-result.txt.tmp: No such file or directory
> > mv: cannot stat '/home/jenkins/workspace/vdsm_standard-check-patch/vdsm@tmp/durable-d993b7a2/jenkins-result.txt.tmp': No such file or directory
> > missing workspace /home/jenkins/workspace/vdsm_standard-check-patch/vdsm on vm0088.workers-phx.ovirt.org
> >
> > Build:
> >
> https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_...
> >
> >
> > Error:
> > sh: /home/jenkins/workspace/vdsm_standard-check-patch/vdsm@tmp/durable-edec94a0/jenkins-result.txt.tmp: No such file or directory
> > mv: cannot stat '/home/jenkins/workspace/vdsm_standard-check-patch/vdsm@tmp/durable-edec94a0/jenkins-result.txt.tmp': No such file or directory
> > missing workspace /home/jenkins/workspace/vdsm_standard-check-patch/vdsm on vm0013.workers-phx.ovirt.org
> >
> > Build:
> >
> https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_...
> >
> > On Wed, Dec 19, 2018 at 4:36 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
> > >
> > > I have seen this before, very strange failure.
> > >
> > > The job ran normally and then aborted when we started to run the tests.
> > > The error does not come from our tests.
> > >
> > > Error:
> > > ../tests/run_tests_local.sh API_test.py alignmentscan_test.py
> > > api_response_test.py bridge_test.py caps_test.py client_test.py
> > > clientif_test.py cmdutils_test.py config_test.py cpu_profile_test.py
> > > cpuinfo_test.py domcaps_test.py encoding_test.py exception_test.py
> > > executor_test.py eventfd_test.py fallocate_test.py filecontrol_test.py
> > > fuser_test.py glob_test.py gluster_cli_test.py
> > > gluster_exception_test.py glusterTestData.py
> > > gluster_thinstorage_test.py hooks_test.py hostdev_test.py
> > > hoststats_test.py hugepages_test.py hwinfo_test.py jobs_test.py
> > > jsonRpcClient_test.py jsonrpc_test.py loopback_test.py mkimage_test.py
> > > modprobe.py moduleloader_test.py monkeypatch_test.py mom_test.py
> > > mompolicy_test.py osinfo_test.py osutils_test.py passwords_test.py
> > > permutation_test.py protocoldetector_test.py response_test.py
> > > rngsources_test.py schedule_test.py schemavalidation_test.py
> > > sigutils_test.py sparsify_test.py stompadapter_test.py
> > > stompasyncclient_test.py stompasyncdispatcher_test.py stomp_test.py
> > > sysprep_test.py taskset_test.py testlib_test.py tool_confmeta_test.py
> > > tool_test.py throttledlog_test.py unicode_test.py utils_test.py
> > > validate_test.py vdsmapi_test.py vdsmdumpchains_test.py verify.py
> > > vmapi_test.py vmfakelib_test.py vmTestsData.py zombiereaper_test.py
> > > network/*_test.py devices/parsing/complex_vm_test.py
> > > common/cache_test.py common/cmdutils_test.py common/concurrent_test.py
> > > common/contextlib_test.py common/fileutils_test.py
> > > common/function_test.py common/hostutils_test.py
> > > common/libvirtconnection_test.py common/logutils_test.py
> > > common/network_test.py common/osutils_test.py common/proc_test.py
> > > common/properties_test.py common/pthread_test.py common/time_test.py
> > > common/validate_test.py
> > >
> > > missing workspace
> > > /home/jenkins/workspace/vdsm_standard-check-patch/vdsm on
> > > vm0038.workers-phx.ovirt.org
> > >
> > >
> > > Build:
> > >
> https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_...
>
[VDSM] all tests passed, build failed with "tar: journal/b087148aba6d49b9bbef488e52a48752/system.journal: file changed as we read it"
by Nir Soffer
We have this failure that pops up randomly:
1. All tests pass
___________________________________ summary ____________________________________
tests: commands succeeded
storage-py27: commands succeeded
storage-py36: commands succeeded
lib-py27: commands succeeded
lib-py36: commands succeeded
network-py27: commands succeeded
network-py36: commands succeeded
virt-py27: commands succeeded
virt-py36: commands succeeded
congratulations :)
2. But we fail to collect logs at the end
##########################################################
## Wed Nov 28 17:39:50 UTC 2018 Finished env: fc28:fedora-28-x86_64
## took 764 seconds
## rc = 1
##########################################################
##! ERROR vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
##! Last 20 log entries: /tmp/mock_logs.Lcop4ZOq/script/stdout_stderr.log
##!
journal/b087148aba6d49b9bbef488e52a48752/system.journal
tar: journal/b087148aba6d49b9bbef488e52a48752/system.journal: file changed as we read it
journal/b087148aba6d49b9bbef488e52a48752/user-1000.journal
lastlog libvirt/ libvirt/lxc/ libvirt/libxl/ libvirt/qemu/ libvirt/qemu/LiveOS-f920001d-be4e-47ea-ac26-72480fd5be87.log libvirt/uml/
ovirt-guest-agent/ ovirt-guest-agent/ovirt-guest-agent.log README samba/ samba/old/ sssd/ tallylog wtmp
Took 678 seconds
===================================
##!
##! ERROR ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
##!########################################################
This looks like an issue with vdsm check-patch.sh:
function collect_logs {
    res=$?
    [ "$res" -ne 0 ] && echo "*** err: $res"
    cd /var/log
    tar -cvzf "$EXPORT_DIR/mock_varlogs.tar.gz" *
    cd /var/host_log
    tar -cvzf "$EXPORT_DIR/host_varlogs.tar.gz" *
}
trap collect_logs EXIT
Seems that tar fails to collect a log if the log is modified while it is being
copied, which makes sense.
We can ignore errors from tar, since log collection should not fail the build
(a sketch of that option follows below), but I think a better solution is to
avoid collecting any logs at all, since vdsm writes its own logs during the
tests - all the relevant info should be in the vdsm log.
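If we go with ignoring the error, one possible variant of the trap is sketched
below. It relies on GNU tar reporting "file changed as we read it" with exit
status 1 ("some files differ"), while fatal errors exit with 2, so only the
benign case is tolerated:

function collect_logs {
    res=$?
    [ "$res" -ne 0 ] && echo "*** err: $res"
    cd /var/log
    # GNU tar exits with 1 when a file changed while it was being read,
    # and with 2 on fatal errors; accept exit status 1 so a busy journal
    # file does not turn log collection into a build error.
    tar -cvzf "$EXPORT_DIR/mock_varlogs.tar.gz" * || [ $? -eq 1 ]
    cd /var/host_log
    tar -cvzf "$EXPORT_DIR/host_varlogs.tar.gz" * || [ $? -eq 1 ]
}
trap collect_logs EXIT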
Here is the list of collected logs:
+ tar -cvzf /home/jenkins/workspace/vdsm_master_check-patch-fc28-x86_64/vdsm/exported-artifacts/mock_varlogs.tar.gz btmp dnf.librepo.log dnf.log dnf.rpm.log faillog glusterfs hawkey.log journal lastlog libvirt openvswitch README tallylog vdsm_tests.log wtmp yum.log
btmp dnf.librepo.log dnf.log dnf.rpm.log faillog glusterfs/ hawkey.log journal/ lastlog libvirt/ libvirt/qemu/ openvswitch/ openvswitch/ovs-vswitchd.log openvswitch/ovsdb-server.log README tallylog vdsm_tests.log wtmp yum.log
+ cd /var/host_log
+ tar -cvzf /home/jenkins/workspace/vdsm_master_check-patch-fc28-x86_64/vdsm/exported-artifacts/host_varlogs.tar.gz anaconda audit boot.log btmp chrony cloud-init.log cloud-init-output.log cron dnf.librepo.log dnf.log dnf.rpm.log firewalld glusterfs hawkey.log journal lastlog libvirt ovirt-guest-agent README samba sssd tallylog wtmp
anaconda/ anaconda/ifcfg.log anaconda/ks-script-l5qnynnj.log anaconda/storage.log anaconda/program.log anaconda/ks-script-b5_08tmo.log anaconda/ks-script-6uks8bp3.log anaconda/hawkey.log anaconda/syslog anaconda/journal.log anaconda/dnf.librepo.log anaconda/packaging.log anaconda/dbus.log anaconda/anaconda.log anaconda/ks-script-slrcz39_.log
audit/ audit/audit.log.3 audit/audit.log.2 audit/audit.log.1 audit/audit.log audit/audit.log.4
boot.log btmp chrony/ cloud-init.log cloud-init-output.log cron dnf.librepo.log dnf.log dnf.rpm.log firewalld glusterfs/ hawkey.log
journal/ journal/b087148aba6d49b9bbef488e52a48752/ journal/b087148aba6d49b9bbef488e52a48752/system.journal
tar: journal/b087148aba6d49b9bbef488e52a48752/system.journal: file changed as we read it
journal/b087148aba6d49b9bbef488e52a48752/user-1000.journal
lastlog libvirt/ libvirt/lxc/ libvirt/libxl/ libvirt/qemu/ libvirt/qemu/LiveOS-f920001d-be4e-47ea-ac26-72480fd5be87.log libvirt/uml/
ovirt-guest-agent/ ovirt-guest-agent/ovirt-guest-agent.log README samba/ samba/old/ sssd/ tallylog wtmp
Most if not all of these are not relevant to the vdsm tests, and should not be collected.
This was added in:
commit 9c9c17297433e5a5a49aa19cde10b206e7db61e9
Author: Edward Haas <edwardh(a)redhat.com>
Date: Tue Apr 17 10:53:11 2018 +0300
automation: Collect logs even when check-patch fails
Change-Id: Idfe07ce6fc55473b1db1d7f16754f559cc5c345a
Signed-off-by: Edward Haas <edwardh(a)redhat.com>
Reviewed in:
https://gerrit.ovirt.org/c/90370
Edward, can you explain why we need to collect logs during check-patch,
and why we need to collect all the logs in the system?
Nir
[Ovirt] [CQ weekly status] [21-12-2018]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to the colour map below for further information on the meaning of
the colours.
*CQ-4.2*: GREEN (#3)
Last job failure was on Dec 11th (i.e. no failures were reported in the last 10
days for 4.2)
*CQ-Master:* RED (#1)
Project: ALL
Reason: we have an active regression:
https://ovirt-jira.atlassian.net/browse/OVIRT-2634
Fix has been merged and waiting to pass CQ:
https://gerrit.ovirt.org/#/c/96371/
Please note:
Due to the shutdown and holidays, the next report will be sent on Jan 11th
Happy Holidays!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures indicate a healthy project, as we expect a number of
failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the
project(s) has been broken. Only active regressions will be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
planned datacenter maintenance 21.12.2018 20:30 UTC
by Evgheni Dereveanchin
Hi everyone,
The network switch stack used by the oVirt PHX datacenter needs a reboot,
which is scheduled for tomorrow. It is expected to be a quick task, yet it
may cut all networking, including shared storage access, for all of our
hypervisors for a couple of minutes.
For this reason I'll shut down non-critical services beforehand and pause
CI to minimize I/O activity and protect against potential VM disk
corruption.
Maintenance window: 21.12.2018 20:30 UTC - 21:30 UTC
Services that may be unreachable for short periods of time during this
outage are:
* Package repositories
* Glance image repository
* Jenkins CI
Other services such as the website, gerrit and mailing lists are not
affected and will be operating as usual.
--
Regards,
Evgheni Dereveanchin
Failure while trying to setup master
by Shani Leviim
Hi,
I've fetched & rebased ovirt-engine master, but ran into this error message
while trying to run the setup:
[ ERROR ] schema.sh: FATAL: Cannot execute sql command:
--file=/home/sleviim/ovirt-engine/share/ovirt-engine/dbscripts/upgrade/04_03_0590_remove_maintenance_and_optional_reason_required.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
refresh failed
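For reference, one way to see the underlying SQL error might be to run the
failing upgrade script directly with psql; the database name and user below
are assumptions based on a default development setup:

# Hypothetical debugging step: execute the failing upgrade script directly
# against the engine database to get the full SQL error message.
psql -U engine -d engine \
    -f /home/sleviim/ovirt-engine/share/ovirt-engine/dbscripts/upgrade/04_03_0590_remove_maintenance_and_optional_reason_required.sql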
Can anyone assist, please?
Regards,
Shani Leviim
Adding multiple VM NICs or VM tags in a single request
by Mahesh Falmari
Hi Nir and RedHat Team,
Would like to confirm whether it is possible to create multiple VM tags or multiple VM NICs in a single POST request. Currently, I can see that a single request creates a single VM NIC or VM tag for a given virtual machine.
We are using the following APIs, but it looks like they don't allow us to add multiple VM NICs or VM tags in a single request. If it is possible, please let us know what the request body should look like in that case.
POST /vms/{vm:id}/tags
Request body:
<tags>
<name>tag1</name>
</tags>
POST /vms/{vm:id}/nics
Request body:
<nic>
<name>network1</name>
</nic>
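As far as I can tell, the collection URLs accept a single element per request,
so the fallback is one POST per object. A rough sketch for NICs follows; the
engine URL, credentials and VM id are placeholders, and tags would work the
same way against /vms/{vm:id}/tags:

# One POST per NIC; curl sends a POST request when -d is used.
# ENGINE, the credentials and VM_ID below are placeholders.
ENGINE=https://engine.example.com/ovirt-engine/api
VM_ID=123
for NAME in nic1 nic2 nic3; do
    curl -k -u admin@internal:password \
        -H "Content-Type: application/xml" \
        -d "<nic><name>${NAME}</name></nic>" \
        "${ENGINE}/vms/${VM_ID}/nics"
done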
Thanks & Regards,
Mahesh Falmari
Nominate Dominik Holler to have a merge permissions on ovirt-site
by Michael Burman
Hi All,
I would like to nominate Dominik Holler to receive merge permissions on
ovirt-site.
Dominik is a professional member and developer in our oVirt community and
organization. His contribution to oVirt is huge, and I believe he is an
appropriate nominee for such permissions.
Thank you)
--
Michael Burman
Senior Quality engineer - rhv network - redhat israel
Red Hat
<https://www.redhat.com>
mburman(a)redhat.com M: 0545355725 IM: mburman
<https://red.ht/sig>