[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1459 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-56n01.
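For an infra owner picking this up, the check amounts to verifying free space on the node and pruning Docker leftovers if it is low. Below is a minimal Python sketch of that idea, not the actual docker_cleanup.py or project_setup.sh; the mount point and the 10 GiB threshold are assumptions.

import shutil
import subprocess

MIN_FREE_BYTES = 10 * 1024**3  # assumed threshold: 10 GiB

def free_bytes(path="/var/lib/docker"):  # assumed Docker data directory
    """Return free bytes on the filesystem holding `path`."""
    return shutil.disk_usage(path).free

def prune_docker():
    """Remove stopped containers, unused networks and dangling images."""
    subprocess.run(["docker", "system", "prune", "--force"], check=True)

if __name__ == "__main__":
    free = free_bytes()
    print("free: %.1f GiB" % (free / 1024**3))
    if free < MIN_FREE_BYTES:
        prune_docker()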
[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1458 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-gwkk8.
[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1457 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-f5gss.
[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1457 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-68zmd.
[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1453 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-cq769.
[RHV] [CQ weekly status] [01-03-2019]
by Dafna Ron
Hi,
This mail provides the current status of CQ and allows people to review the
status before and after the weekend.
Please refer to the colour map below for further information on the meaning
of the colours.
*CQ-RHV-4.2:* GREEN (#1)
Last failure was on 28-02-2019 for project rhevm-appliance due to a failed
build-artifacts job.
This issue is still happening and a message was sent to the developer.
*CQ-RHV-Master:* GREEN (#1)
Last failure is identical to 4.2: a failed build-artifacts job for the same
project.
** We do not yet have a 4.3 CQ job; we will start working on one next week.
Current running jobs for 4.2 [1] and for master [2] can be found below:
[1]
https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/Change%20...
[2]
https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/Change%20...
Happy week!
Dafna
------------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** Green for more than 3 days may suggest we need a review of our test
coverage.
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** Intermittent failures would indicate a healthy project, as we expect a
number of failures during the week.
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the
project(s) has been broken. Only active regressions will be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
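For reference, the thresholds above read as a simple classification rule. The following is a minimal Python sketch under that reading; the function names are hypothetical and not part of any CQ tooling.

def green_level(days_passing):
    """Map consecutive passing days to a GREEN sub-level (per the map above)."""
    if days_passing <= 3:
        return "GREEN (#1)"
    if days_passing <= 7:
        return "GREEN (#2)"
    return "GREEN (#3)"

def red_level(days_failing):
    """Map consecutive failing days to a RED sub-level (per the map above)."""
    if days_failing <= 3:
        return "RED (#1)"
    if days_failing <= 7:
        return "RED (#2)"
    return "RED (#3)"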
[Ovirt] [CQ weekly status] [01-03-2019]
by Dafna Ron
Hi,
This mail provides the current status of CQ and allows people to review the
status before and after the weekend.
Please refer to the colour map below for further information on the meaning
of the colours.
*CQ-4.2*: GREEN (#1)
Last CQ job failure on 4.2 was on 27-02-2019 on project ovirt-provider-ovn
due to a failed build-artifacts job. This issue is now fixed.
Please note that this is a CQ job failure but not an OST job failure, as we
exit if no package is available.
*CQ-4.3*: RED (#1)
We added a CQ job for 4.3, but not all projects have created the branch yet.
I created a Google sheet to follow the progress of this work, and I will
use this report to ask all the managers to go over the doc and add info:
https://docs.google.com/spreadsheets/d/1i142cTN_0QwOvogtlQiF6bcIYxTpBqYBe...
*CQ-Master:* RED (#1)
We have sporadic failures in the test get_host_devices.
This is affecting all projects and we do not yet know what is causing the
issue.
Michal checked the issue and thinks the test should be modified to retry,
as the info is filled in by the host about 4 seconds after the query is sent.
Milan was asked to add a loop with a 1s delay to get_host_devices to fix the
test.
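A minimal sketch of the suggested fix, assuming a simple polling wrapper; this is not Milan's actual patch, and the wrapper name, the `host` object, and the timeout are illustrative only.

import time

def get_host_devices_with_retry(host, timeout=10.0, delay=1.0):
    """Poll host.get_host_devices() with a 1s delay until it returns data,
    since the host reportedly fills in the info ~4 seconds after the query."""
    deadline = time.monotonic() + timeout
    while True:
        devices = host.get_host_devices()  # assumed query used by the test
        if devices:
            return devices
        if time.monotonic() >= deadline:
            raise TimeoutError("host devices not reported within timeout")
        time.sleep(delay)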
Please note that we have ruled out a possible infra cause for this failure
before alerting dev as the failure is sporadic and happens on multiple
projects.
Jira: https://ovirt-jira.atlassian.net/browse/OVIRT-2686
Current running jobs for 4.2 [1] and master [2] can be found here:
[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-...
[2]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** Green for more than 3 days may suggest we need a review of our test
coverage.
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** Intermittent failures would indicate a healthy project, as we expect a
number of failures during the week.
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the
project(s) has been broken. Only active regressions will be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
[JENKINS] Failed to set up project
kubevirt_containerized-data-importer_standard-check-pr
by jenkins@jenkins.phx.ovirt.org
Failed to run project_setup.sh for:
#1448 containerized-data-importer [check-patch].
It probably means that docker_cleanup.py failed.
This step doesn't fail the job, but we do collect
data about such failures to find the root cause.
Infra owner, ensure that we're not running out of
disk space on openshift-kubevirt-org-jbwbg.