[ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 10-03-19 ] [ verify_glance_import ]
by Liora Milbaum
Link and headline of suspected patches:
Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13310/
Link to all logs:
(Relevant) error snippet from the log:
<error>
@ Run test: 004_basic_sanity.py:
nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
  # verify_glance_import:
    * Unhandled exception in <function <lambda> at 0x7f9f147cbed8>
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 234, in assert_equals_within
    res = func()
  File "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py", line 1153, in <lambda>
    lambda: api.disks.get(disk_name).status.state == 'ok',
AttributeError: 'NoneType' object has no attribute 'status'
    * Collect artifacts:
      ~ [Thread-4] Copy from lago-basic-suite-master-engine:/tmp/otopi* to /dev/shm/ost/deployment-basic-suite-master/default/test_logs/004_basic_sanity.verify_glance_import-20190307161910/lago-basic-suite-master-engine/_tmp_otopi*: ERROR (in 0:00:00)
      - [Thread-4] lago-basic-suite-master-engine: ERROR (in 0:00:00)
    * Collect artifacts: ERROR (in 0:00:02)
  # verify_glance_import: ERROR (in 0:00:02)
  # Results located at /dev/shm/ost/deployment-basic-suite-master/004_basic_sanity.py.junit.xml
@ Run test: 004_basic_sanity.py: ERROR (in 0:00:02)
Error occured, aborting
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 383, in do_run
    self.cli_plugins[args.ovirtverb].do_run(args)
  File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in do_run
    self._do_run(**vars(args))
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 549, in wrapper
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 560, in wrapper
    return func(*args, prefix=prefix, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 107, in do_ovirt_runtest
    raise RuntimeError('Some tests failed')
RuntimeError: Some tests failed
</error>
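For reference, a minimal sketch (not the actual OST fix) of how the polling predicate in 004_basic_sanity.py could guard against the disk not being visible yet, so a None result from api.disks.get() reads as "not ready" instead of raising AttributeError; the 300-second timeout below is an assumed placeholder:

from ovirtlago import testlib

def _disk_state(api, disk_name):
    # api.disks.get() returns None until the imported image shows up
    disk = api.disks.get(disk_name)
    return disk.status.state if disk is not None else None

# assert_equals_within is the helper shown in the traceback above
testlib.assert_equals_within(
    lambda: _disk_state(api, disk_name) == 'ok', True, 300)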
--
*Liora Milbaum*
Senior Principal Software Engineer
RHV/CNV DevOps
EMEA VIRTUALIZATION R&D
T: +972-54-6560051
[ OST Failure Report ] [ oVirt Master (ovirt-engine-nodejs-modules) ] [ 27-02-2019 ] [ 002_bootstrap.add_vm2_lease ]
by Dafna Ron
Hi,
We have a failure in the basic suite on the master branch. The most recent
failure was in patch:
https://gerrit.ovirt.org/#/c/98087/1 - Add pre-seed for ovirt-web-ui
CQ is pointing at the patch below as the root cause (which was merged a while
back):
https://gerrit.ovirt.org/#/c/97491/ - Add pre-seed for ovirt-web-ui
Can you please check the issue? It seems both patches are changing the
same thing, and the project seems to be broken since
https://gerrit.ovirt.org/#/c/97491/3
Latest failure:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13171/
Logs:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13171/arti...
errors from logs:
Engine:
2019-02-27 13:37:28,479-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [74283e25] Transaction rolled-back for command 'org.ovirt.engine.core.bll.UpdateVmCommand'.
2019-02-27 13:37:28,483-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [74283e25] EVENT_ID: USER_FAILED_UPDATE_VM(58), Failed to update VM vm2 (User: admin@internal-authz).
2019-02-27 13:37:28,485-05 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [74283e25] Lock freed to object 'EngineLock:{exclusiveLocks='[vm2=VM_NAME]', sharedLocks='[3500eb82-e5e2-4e24-b41c-ea02d9f6adee=VM]'}'
2019-02-27 13:37:28,485-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default task-1) [74283e25] method: runAction, params: [UpdateVm, VmManagementParametersBase:{commandId='34059769-05b9-429e-8356-f6b9b9953f55', user='admin', commandType='UpdateVm', vmId='3500eb82-e5e2-4e24-b41c-ea02d9f6adee'}], timeElapsed: 6618ms
2019-02-27 13:37:28,486-05 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-1) [] Operation Failed: []
2019-02-27 13:37:28,579-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] method: get, params: [e29c0ba1-464c-4eb4-a8f2-c6933d99969a], timeElapsed: 3ms
vdsm:
2019-02-27 13:37:21,987-0500 INFO (jsonrpc/1) [vdsm.api] FINISH lease_info error=No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee from=::ffff:192.168.201.4,43920, flow_id=117dec74-ad59-4b12-8148-b2c130337c10, task_id=9c297d41-0aa7-4c74-b268-b710e666bc6c (api:52)
2019-02-27 13:37:21,988-0500 ERROR (jsonrpc/1) [storage.TaskManager.Task] (Task='9c297d41-0aa7-4c74-b268-b710e666bc6c') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in lease_info
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3702, in lease_info
    info = dom.lease_info(lease.lease_id)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 674, in lease_info
    return vol.lookup(lease_id)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 553, in lookup
    raise NoSuchLease(lease_id)
NoSuchLease: No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee
2019-02-27 13:37:21,988-0500 INFO (jsonrpc/1) [storage.TaskManager.Task] (Task='9c297d41-0aa7-4c74-b268-b710e666bc6c') aborting: Task is aborted: u'No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee' - code 100 (task:1181)
2019-02-27 13:37:21,988-0500 ERROR (jsonrpc/1) [storage.Dispatcher] FINISH lease_info error=No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee (dispatcher:87)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 74, in wrapper
    result = ctask.prepare(func, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
    return m(self, *a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1189, in prepare
    raise self.error
NoSuchLease: No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee
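For context, a minimal sketch of what the failing flow boils down to on the vdsm side, assuming direct access to the storage domain object on the host; the dom and lease_id names here are placeholders, not the real test objects:

from vdsm.storage import xlease

def lease_exists(dom, lease_id):
    # sd.lease_info() looks the lease up in the xleases volume and
    # propagates NoSuchLease when the lease was never written there
    try:
        dom.lease_info(lease_id)
    except xlease.NoSuchLease:
        return False
    return True

The add_vm2_lease test can only pass once the engine has actually created the lease on the storage domain; a lookup like the one above returning False is what the NoSuchLease traceback corresponds to.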
ovirt-vmconsole-1.0.6-3.fc28.noarch: Non-fatal POSTUN scriptlet failure
by Nir Soffer
When updating a Fedora 28 host with the ovirt-release-master repo, I see this
error:
  Running scriptlet: ovirt-vmconsole-1.0.6-3.fc28.noarch            143/251
libsemanage.semanage_module_key_set_name: Name /usr/share/selinux/packages/ovirt-vmconsole/ovirt_vmconsole.pp is invalid.
semodule: Failed on /usr/share/selinux/packages/ovirt-vmconsole/ovirt_vmconsole.pp!
warning: %postun(ovirt-vmconsole-1.0.6-3.fc28.noarch) scriptlet failed, exit status 1
Non-fatal POSTUN scriptlet failure in rpm package ovirt-vmconsole
Non-fatal POSTUN scriptlet failure in rpm package ovirt-vmconsole
I did not try to use it, so I don't know if we have a real issue. It looks like
we need to either fix the issue or silence the warnings.
Nir
[Ovirt] [CQ weekly status] [08-03-2019]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to below colour map for further information on the meaning of
the colours.
*CQ-4.2*: GREEN (#1)
The last CQ job failure on 4.2 was on 06-03-2019, on project ovirt-ansible-vm-infra,
due to a missing jtk2 package. This issue is now fixed with patch:
https://gerrit.ovirt.org/#/c/98304/
*CQ-4.3*: RED (#1)
The last job to fail was for project imgbased. The failure is due to the random
failures we have been having in the add_vm2_lease test; this issue is
currently being debated on the list.
A new repo and packages were updated by Galit due to a CQ failure for
missing packages: https://gerrit.ovirt.org/#/c/98312/
*CQ-Master:* RED (#1)
We were failing this week due to missing packages, which was fixed by:
https://gerrit.ovirt.org/#/c/98304/
We also had two regressions, which were fixed by:
https://gerrit.ovirt.org/#/c/98317/ and by
https://gerrit.ovirt.org/#/c/98371/
All projects are currently passing.
Current running jobs for 4.2 [1] and master [2] can be found here:
[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-...
[2]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or current regressions
** intermittent failures are expected in a healthy project during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the project(s) has been broken. Only active regressions will be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
ovirt and AMD virtualization bug
by Hetz Ben Hamo
Hi,
I've done some research on a bug which I found on AMD Zen/Zen+ based
CPUs. Apparently, when using nested virtualization, the CPU doesn't expose
the "monitor" CPU flag.
While other virtualization solutions (Xen, ESXi, Hyper-V) don't care about
this and let you run nested without any problem, oVirt doesn't let you
launch or create any VM and complains that the "monitor" CPU flag is missing.
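For illustration, a quick check (not vdsm's actual detection code) of whether the guest-visible CPU advertises the "monitor" flag; on the Zen/Zen+ nested setups described above it returns False:

def has_monitor_flag(cpuinfo_path='/proc/cpuinfo'):
    # the flag oVirt complains about is the one listed (or missing)
    # in the "flags" line of /proc/cpuinfo inside the nested host
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith('flags'):
                return 'monitor' in line.split(':', 1)[1].split()
    return False

print('monitor flag present: %s' % has_monitor_flag())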
Since QEMU/KVM cannot do anything about it, I was wondering if the oVirt
dev team could please ignore this flag.
Thanks
ovirt-optimizer: subproject health check
by Sandro Bonazzola
Hi,
oVirt Optimizer was introduced in oVirt 3.5.2 back in 2014.
The last update on the master branch is dated November 2017, with version 0.15.
The package was shipped up to oVirt 4.2.8 and has not been shipped in
oVirt 4.3, although no official deprecation has been announced.
According to Gerrit, current maintainers are Doron Fediuck and Martin Sivak.
According to Bugzilla there is one bug open since February 2018:
Bug 1545077 <https://bugzilla.redhat.com/show_bug.cgi?id=1545077> - cancel optimization button is enabled all the time
which is currently untargeted, unassigned and without activity since March 2018.
The project has feature / development documentation at
https://ovirt.org/develop/release-management/features/sla/optaplanner.html
but no official documentation.
Can we declare this subproject retired from the oVirt project and drop CI
automation for it?
Thanks,
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
ovirt-scheduler-proxy: subproject health check
by Sandro Bonazzola
Hi,
oVirt Scheduler Proxy was introduced in oVirt 3.3.1 back in 2013.
The last update on the master branch is dated November 2017, with 0.1.8 released.
The package was shipped up to oVirt 4.2.8 and has not been shipped in
oVirt 4.3, although no official deprecation has been announced.
According to Gerrit, current maintainers are Doron Fediuck and Martin Sivak.
According to Bugzilla there are no open bugs or RFEs for this project.
The project has no documentation available on oVirt website.
Can we declare this subproject retired from the oVirt project and drop CI
automation for it?
Thanks,
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
[ OST Failure Report ] [ oVirt Master ] [ 07-03-2019 ] [ 008_basic_ui_sanity.initialize_firefox]
by Galit Rosenthal
Hi,
We are failing the basic suite on master (vdsm).
Can you please have a look at the issue?
*Error Message:*
Message: Error forwarding the new session new session request for
webdriver should contain a location header or an
'application/json;charset=UTF-8' response body with the session ID.
*From [2]:*
08:22:04 + xmllint --format /dev/shm/ost/deployment-basic-suite-master/008_basic_ui_sanity.py.junit.xml
08:22:04 <?xml version="1.0" encoding="UTF-8"?>
08:22:04 <testsuite name="nosetests" tests="7" errors="1" failures="0" skip="0">
08:22:04   <testcase classname="008_basic_ui_sanity" name="init" time="0.001"/>
08:22:04   <testcase classname="008_basic_ui_sanity" name="start_grid" time="144.413">
08:22:04     <system-out><![CDATA[executing shell: docker ps
08:22:04 CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
08:22:04
08:22:04 executing shell: docker rm -f grid_node_chrome grid_node_firefox selenium-hub
08:22:04 Error response from daemon: No such container: grid_node_chrome
08:22:04 Error response from daemon: No such container: grid_node_firefox
08:22:04 Error response from daemon: No such container: selenium-hub
08:22:04
08:22:04 executing shell: docker kill grid_node_chrome grid_node_firefox selenium-hub
08:22:04 Error response from daemon: Cannot kill container grid_node_chrome: No such container: grid_node_chrome
08:22:04 Error response from daemon: Cannot kill container grid_node_firefox: No such container: grid_node_firefox
08:22:04 Error response from daemon: Cannot kill container selenium-hub: No such container: selenium-hub
08:22:04
08:22:04 executing shell: docker ps
08:22:04 CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
08:22:04
08:22:04 executing shell: docker network ls
08:22:04 NETWORK ID    NAME    DRIVER  SCOPE
08:22:04 f710c07f46da  bridge  bridge  local
08:22:04 7d05935c9513  host    host    local
08:22:04 415e4fd8022f  none    null    local
08:22:04
08:22:04 executing shell: docker network rm grid
08:22:04 Error response from daemon: network grid not found
08:22:04
08:22:04 executing shell: docker network ls
08:22:04 NETWORK ID    NAME    DRIVER  SCOPE
08:22:04 f710c07f46da  bridge  bridge  local
08:22:04 7d05935c9513  host    host    local
08:22:04 415e4fd8022f  none    null    local
08:22:04
08:22:04 executing shell: docker network ls
08:22:04 NETWORK ID    NAME    DRIVER  SCOPE
08:22:04 f710c07f46da  bridge  bridge  local
08:22:04 7d05935c9513  host    host    local
08:22:04 415e4fd8022f  none    null    local
08:22:04
08:22:04 creating docker network for grid
08:22:04 executing shell: docker network create grid
08:22:04 7406c15c141eccac968e5fba9f091d83831a29c92fc9f61b95bc4fba47f5e73f
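The log above stops while the grid is still being set up; the "Error forwarding the new session" message is typically what the test sees when the hub is reachable but no browser node has registered yet. A minimal sketch, not the suite's actual setup code, of polling the hub before requesting a session; the hub URL, port and 120-second budget are assumptions, and the exact JSON shape of the status reply depends on the grid version:

import json
import time
import urllib2  # the suite scenarios run on Python 2.7

HUB_STATUS_URL = 'http://localhost:4444/wd/hub/status'  # assumed hub address

def wait_for_grid(timeout=120, interval=5):
    # keep polling the Selenium hub status endpoint until it reports
    # ready (i.e. at least one node registered), or give up
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            reply = json.load(urllib2.urlopen(HUB_STATUS_URL, timeout=10))
            if reply.get('value', {}).get('ready'):
                return True
        except (urllib2.URLError, ValueError):
            pass  # hub not answering yet
        time.sleep(interval)
    return False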
*Global link to the CQ failure [3]*
[1]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_cha...
[2]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_cha...
[3]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_cha...
--
GALIT ROSENTHAL
SOFTWARE ENGINEER
Red Hat
<https://www.redhat.com/>
galit(a)gmail.com T: 972-9-7692230