On Thu, Feb 28, 2019 at 11:52 AM Dafna Ron <dron@redhat.com> wrote:

We have a failure on the project in the basic suite, master branch. The most recent failure was in patch:
https://gerrit.ovirt.org/#/c/98087/1 - Add pre-seed for ovirt-web-ui

CQ is pointing at the patch below as the root cause (it was merged a while back):
https://gerrit.ovirt.org/#/c/97491/ - Add pre-seed for ovirt-web-ui

Can you please check the issue? Both patches seem to change the same thing, and the project seems to be broken since https://gerrit.ovirt.org/#/c/97491/3

Latest failure:


errors from logs:

2019-02-27 13:37:28,479-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [74283e25] Transaction rolled-back for command 'org.ovirt.engine.core.bll.UpdateVmCommand'.
2019-02-27 13:37:28,483-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [74283e25] EVENT_ID: USER_FAILED_UPDATE_VM(58), Failed to update VM vm2 (User: admin@inter
2019-02-27 13:37:28,485-05 INFO  [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [74283e25] Lock freed to object 'EngineLock:{exclusiveLocks='[vm2=VM_NAME]', sharedLocks='[3500eb82-e5e2-4e24-b41c-ea
2019-02-27 13:37:28,485-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default task-1) [74283e25] method: runAction, params: [UpdateVm, VmManagementParametersBase:{commandId='34059769-05b9-429e-8356-f6b9b9953f55', user='admin', commandType='UpdateVm', vmId='3500eb82-e5e2-4e24-b41c-ea02d9f6adee'}], timeElapsed: 6618ms
2019-02-27 13:37:28,486-05 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-1) [] Operation Failed: []
2019-02-27 13:37:28,579-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] method: get, params: [e29c0ba1-464c-4eb4-a8f2-c6933d99969a], timeElapsed: 3ms


2019-02-27 13:37:21,987-0500 INFO  (jsonrpc/1) [vdsm.api] FINISH lease_info error=No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee from=::ffff:,43920, flow_id=117dec74-ad59-4b12-8148-b2c130337c10, task_id=9c297d41-0aa7-4c74-b268-b710e666bc6c (api:52)
2019-02-27 13:37:21,988-0500 ERROR (jsonrpc/1) [storage.TaskManager.Task] (Task='9c297d41-0aa7-4c74-b268-b710e666bc6c') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in lease_info
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3702, in lease_info
    info = dom.lease_info(lease.lease_id)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 674, in lease_info
    return vol.lookup(lease_id)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 553, in lookup
    raise NoSuchLease(lease_id)
NoSuchLease: No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee

This is not a vdsm error. Someone asked for information about a non-existent lease,
and vdsm failed the request with the expected error:

    NoSuchLease: No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee

But since this error is not a public error, the vdsm logger handles it as an unexpected
error and creates a lot of noise in the vdsm log.
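
To illustrate the mechanism (a minimal sketch in the spirit of vdsm's API layer, not its
actual code; the PublicError class, dispatch() helper, and error codes here are made up),
a dispatcher typically reports known public errors briefly and treats anything else as
unexpected:

    import logging

    log = logging.getLogger("storage")

    class PublicError(Exception):
        """Base class for errors that are part of the public API."""
        code = 100
        msg = "General error"

    def dispatch(func, *args):
        try:
            return func(*args)
        except PublicError as e:
            # Known public error: report to the caller without a traceback.
            log.info("FINISH %s error=%s", func.__name__, e)
            return {"status": {"code": e.code, "message": str(e)}}
        except Exception:
            # Anything else is logged as unexpected, with a full traceback.
            log.exception("Unexpected error")
            return {"status": {"code": 100, "message": "Unexpected error"}}

    class NoSuchLease(Exception):
        """Private error - the dispatcher does not recognize it."""

    def lease_info(lease_id):
        raise NoSuchLease("No such lease %s" % lease_id)

    # NoSuchLease does not derive from PublicError, so this call is logged
    # as "Unexpected error" with a full traceback - the noise shown above.
    dispatch(lease_info, "3500eb82-e5e2-4e24-b41c-ea02d9f6adee")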

On the engine side, this creates even worse noise, as we can see here:
(look for 2019-02-27 13:37:21,996-05)

All this noise could be avoided if vdsm used a proper public error.
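
For example, continuing the sketch above (again with hypothetical names and error codes,
not vdsm's real exception hierarchy), deriving the lease error from the public base is
enough for the dispatcher to log it quietly and return a meaningful error code:

    class NoSuchLease(PublicError):
        """Public version: recognized by the dispatcher, logged quietly."""
        code = 607  # hypothetical error code
        msg = "No such lease"

        def __init__(self, lease_id):
            super(NoSuchLease, self).__init__("%s: %s" % (self.msg, lease_id))

With such a change, calling lease_info on a missing lease would produce a single log
line and a proper error response, instead of a full traceback on both the vdsm and
engine sides.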

But this error is not related to the actual failing test; it is just noise hiding the real problem.

Thanks for reporting this.