[JIRA] (OVIRT-2828) Re: [ovirt-devel] Re: Check patch failure in vdsm

[ https://ovirt-jira.atlassian.net/browse/OVIRT-2828?page=com.atlassian.jira.p... ]

Anton Marchukov reassigned OVIRT-2828:
--------------------------------------
    Assignee: Nir Soffer  (was: infra)
Re: [ovirt-devel] Re: Check patch failure in vdsm
-------------------------------------------------

                 Key: OVIRT-2828
                 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2828
             Project: oVirt - virtualization made easy
          Issue Type: By-EMAIL
            Reporter: Nir Soffer
            Assignee: Nir Soffer
On Tue, Nov 12, 2019 at 5:34 PM Miguel Duarte de Mora Barroso <mdbarroso@redhat.com> wrote:
On Mon, Nov 11, 2019 at 11:12 AM Eyal Shenitzky <eshenitz@redhat.com> wrote:
Hi,
I encountered the following error in the py36 tests in vdsm check-patch:
...
...
  File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 86, in <lambda>
    firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 92, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/usr/local/lib/python3.7/site-packages/pluggy/hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/storage-py37/lib/python3.7/site-packages/_pytest/config/__init__.py", line 82, in main
    return config.hook.pytest_cmdline_main(config=config)
  File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/storage-py37/bin/pytest", line 8, in <module>
    sys.exit(main())
[Inferior 1 (process 22145) detached]
=============================================================
=                Terminating watched process                =
=============================================================
PROFILE {"command": ["python", "py-watch", "600", "pytest", "-m", "not (slow or stress)", "--durations=10", "--cov=vdsm.storage", "--cov-report=html:htmlcov-storage-py37", "--cov-fail-under=62", "storage"], "cpu": 39.921942808919184, "elapsed": 604.4699757099152, "idrss": 0, "inblock": 1693453, "isrss": 0, "ixrss": 0, "majflt": 2, "maxrss": 331172, "minflt": 5606489, "msgrcv": 0, "msgsnd": 0, "name": "storage-py37", "nivcsw": 139819, "nsignals": 0, "nswap": 0, "nvcsw": 187576, "oublock": 2495645, "start": 1573386812.7961884, "status": 143, "stime": 118.260961, "utime": 123.055197}
ERROR: InvocationError for command /home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/storage-py37/bin/python profile storage-py37 python py-watch 600 pytest -m 'not (slow or stress)' --durations=10 --cov=vdsm.storage --cov-report=html:htmlcov-storage-py37 --cov-fail-under=62 storage (exited with code 143)
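Note that exit code 143 is 128 + 15 (SIGTERM): the 600-second py-watch timer expired and the watchdog terminated pytest, so this is a timeout, not a test failure (the "elapsed": 604.4 and "status": 143 fields in the PROFILE line agree). The "[Inferior 1 ... detached]" line suggests a debugger was attached to dump state before termination. A minimal sketch of that watchdog pattern (hypothetical helper, not py-watch's actual code):

```python
import signal
import subprocess
import sys

def run_with_timeout(cmd, timeout):
    """Run cmd; if it exceeds timeout seconds, SIGTERM it (like a watchdog)."""
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.terminate()  # send SIGTERM, as a watchdog does on timeout
        proc.wait()
        # Shell-style status for a process killed by signal N is 128 + N,
        # so SIGTERM (15) yields the 143 seen in the CI log.
        return 128 + signal.SIGTERM

if __name__ == "__main__":
    # A child that sleeps longer than the timeout triggers the watchdog path.
    status = run_with_timeout(
        [sys.executable, "-c", "import time; time.sleep(60)"], timeout=1)
    print(status)  # 143
```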
Is there any known issue?
Anyone able to pitch in? I think something similar is happening in [0], also in check-patch [1].
[0] - https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-pat...
Yes, it looks like the same issue.
[1] - https://gerrit.ovirt.org/#/c/104274/

Jenkins slaves have been very slow recently. I suspect we run too many jobs concurrently, or are using too many virtual CPUs.
-- This message was sent by Atlassian Jira (v1001.0.0-SNAPSHOT#100114)