Hi Milan,
(Adding infra-support to open a ticket)
For the first job, the automation/deploy.sh script failed, which means
vdsm failed to install inside the VM created by Lago. I couldn't
figure out why, since the 'deploy.sh' script was missing the bash '-x'
flag, so the failing command isn't visible in the console output.
/var/log/messages doesn't contain any VDSM logs either, so I assume it
failed before VDSM ever started. Anyway, now that [1] is merged it
should be easier to debug this next time.
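To illustrate, a minimal sketch of what the '-x' flag buys us in a
deploy script (the install step below is just a placeholder, not the
actual content of deploy.sh or of [1]):

    #!/bin/bash -xe
    # -x prints each command before it runs, so the failing step shows
    # up in the Jenkins console log; -e aborts on the first failure.
    yum install -y vdsm    # hypothetical install step, for illustration
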
For the second job - this one is caused by Lago's internal repo server
still being up from a previous run on the slave, so the new run could
not bind to its port ('Address already in use'). It seems that this[2]
vdsm check-merged job from April 05 caused it when it timed out
without terminating properly. This is quite rare, I should say; we can
keep this ticket open to check whether it happens again.
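For reference, one possible way to check a slave for such a leftover
repo server (the port number and the process to kill are placeholders
here; adjust them to whatever the job actually uses):

    # list whatever is still listening on the repo server port (placeholder port)
    ss -ltnp | grep ':8585'
    # then stop the stale lago/python process it reports before re-running,
    # e.g. kill <pid shown above>
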
Either way - I think the two failures are unrelated. The best option
(if still relevant, as check-merged has probably run a few times
since) would be to re-trigger and see whether either failure
replicates.
Hi Nadav,
thank you for the explanation. I don't think the failures replicate;
hopefully their causes will be identified and fixed if they occur
again.
[1]
https://gerrit.ovirt.org/#/c/75348/2
[2]
http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/1492/con...
On Fri, Apr 7, 2017 at 10:32 AM, Milan Zamazal <mzamazal(a)redhat.com> wrote:
> Hi,
>
> a series of 4 of my Vdsm patches was merged yesterday and Jenkins has
> failed on two of them in check-merged. See
>
> http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/1504/
> and
>
> http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/1506/.
>
> The corresponding errors were:
>
> 16:20:09 + lago ovirt deploy
> 16:20:09 current session does not belong to lago group.
> 16:20:09 @ Deploy oVirt environment:
> 16:20:09 # ovirt-role metadata entry will be soon deprecated, instead you
> should use the vm-provider entry in the domain definition and set it no one
> of: ovirt-node, ovirt-engine, ovirt-host
> 16:20:09 # Deploy environment:
> 16:20:09 * [Thread-2] Deploy VM vdsm_functional_tests_host-el7:
> 16:20:23 - STDERR
> 16:20:23
> 16:20:23
> 16:20:23 Exiting on user cancel
> 16:20:23
> 16:20:23 * [Thread-2] Deploy VM vdsm_functional_tests_host-el7: ERROR (in 0:00:13)
> 16:20:23 Error while running thread
> 16:20:23 Traceback (most recent call last):
> 16:20:23 File "/usr/lib/python2.7/site-packages/lago/utils.py", line 57, in _ret_via_queue
> 16:20:23 queue.put({'return': func()})
> 16:20:23 File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1339, in _deploy_host
> 16:20:23 host.name(),
> 16:20:23 RuntimeError: /home/jenkins/workspace/vdsm_master_check-merged-el7-x86_64/vdsm/automation/vdsm_functional/default/scripts/_home_jenkins_workspace_vdsm_master_check-merged-el7-x86_64_vdsm_automation_deploy.sh failed with status 1 on vdsm_functional_tests_host-el7
> 16:20:23 # Deploy environment: ERROR (in 0:00:13)
> 16:20:23 @ Deploy oVirt environment: ERROR (in 0:00:14)
> 16:20:23 Error occured, aborting
>
> and
>
> 16:21:32 + lago ovirt deploy
> 16:21:33 current session does not belong to lago group.
> 16:21:33 @ Deploy oVirt environment:
> 16:21:33 # ovirt-role metadata entry will be soon deprecated, instead you
> should use the vm-provider entry in the domain definition and set it no one
> of: ovirt-node, ovirt-engine, ovirt-host
> 16:21:33 @ Deploy oVirt environment: ERROR (in 0:00:00)
> 16:21:33 Error occured, aborting
> 16:21:33 Traceback (most recent call last):
> 16:21:33 File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 303, in do_run
> 16:21:33 self.cli_plugins[args.ovirtverb].do_run(args)
> 16:21:33 File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in do_run
> 16:21:33 self._do_run(**vars(args))
> 16:21:33 File "/usr/lib/python2.7/site-packages/lago/utils.py", line 495, in wrapper
> 16:21:33 return func(*args, **kwargs)
> 16:21:33 File "/usr/lib/python2.7/site-packages/lago/utils.py", line 506, in wrapper
> 16:21:33 return func(*args, prefix=prefix, **kwargs)
> 16:21:33 File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 164, in do_deploy
> 16:21:33 prefix.deploy()
> 16:21:33 File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 633, in wrapper
> 16:21:33 return func(*args, **kwargs)
> 16:21:33 File "/usr/lib/python2.7/site-packages/ovirtlago/reposetup.py", line 110, in wrapper
> 16:21:33 with utils.repo_server_context(args[0]):
> 16:21:33 File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
> 16:21:33 return self.gen.next()
> 16:21:33 File "/usr/lib/python2.7/site-packages/ovirtlago/utils.py", line 97, in repo_server_context
> 16:21:33 root_dir=prefix.paths.internal_repo(),
> 16:21:33 File "/usr/lib/python2.7/site-packages/ovirtlago/utils.py", line 73, in _create_http_server
> 16:21:33 generate_request_handler(root_dir),
> 16:21:33 File "/usr/lib64/python2.7/SocketServer.py", line 419, in __init__
> 16:21:33 self.server_bind()
> 16:21:33 File "/usr/lib64/python2.7/BaseHTTPServer.py", line 108, in server_bind
> 16:21:33 SocketServer.TCPServer.server_bind(self)
> 16:21:33 File "/usr/lib64/python2.7/SocketServer.py", line 430, in server_bind
> 16:21:33 self.socket.bind(self.server_address)
> 16:21:33 File "/usr/lib64/python2.7/socket.py", line 224, in meth
> 16:21:33 return getattr(self._sock,name)(*args)
> 16:21:33 error: [Errno 98] Address already in use
>
> Do you know what's wrong?
>
> Thanks,
> Milan