Barak Korren commented on OVIRT-1712:
-------------------------------------
{quote}
They also had leftovers of ovirt-master_change-queue-tester in the Jenkins work directory,
so this may be the job causing the issue.
{quote}
No, the last job to run on a slave always leaves its $WORKSPACE behind on that slave, so
that if it runs there again, some of its data is already cached.
We need to check the OST cleanup code and the jobs that previously ran on these slaves to
see why the {{lago serve}} process holding the port was not killed. We should probably
also modify how {{lago serve}} works so it is less likely to interfere with other Lago
environments trying to run on the same node, and less likely to stay behind (see the sketch below).
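To illustrate the second part of that suggestion, here is a minimal sketch of the idea, not the actual ovirtlago code: an internal repo server that binds to an ephemeral port and is always torn down when its context exits. The module names match the Python 2.7 stack in the traceback below; the {{listen_addr}} parameter and the plain {{SimpleHTTPRequestHandler}} are stand-ins added for illustration (the real {{_create_http_server}} uses {{generate_request_handler(root_dir)}} to serve {{prefix.paths.internal_repo()}}).
{code:python}
# Sketch only (assumptions noted above), not the ovirtlago implementation.
import contextlib
import threading

import BaseHTTPServer
import SimpleHTTPServer


@contextlib.contextmanager
def repo_server_context(listen_addr='0.0.0.0'):
    # Port 0 asks the kernel for any free port, so two Lago environments on
    # the same slave cannot collide on a fixed port and hit
    # "[Errno 98] Address already in use".
    server = BaseHTTPServer.HTTPServer(
        (listen_addr, 0),
        SimpleHTTPServer.SimpleHTTPRequestHandler,
    )
    port = server.server_address[1]
    thread = threading.Thread(target=server.serve_forever)
    thread.daemon = True
    thread.start()
    try:
        yield port  # callers point the internal repo URL at this port
    finally:
        # Guaranteed teardown, so the server cannot stay behind on the slave
        # after the job finishes.
        server.shutdown()
        server.server_close()
        thread.join()
{code}
The obvious trade-off is that the port is no longer fixed, so whatever writes the repo URL into the test environment would have to pick up the assigned port; either way, the cleanup code should also make sure nothing is left listening on the old fixed port when a job ends.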
Re: Manual OST fails
--------------------
Key: OVIRT-1712
URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1712
Project: oVirt - virtualization made easy
Issue Type: By-EMAIL
Reporter: eyal edri
Assignee: infra
Evgheni,
Was there any change recently to Lago slaves?
On Fri, Oct 20, 2017 at 11:05 AM, Piotr Kliczewski <piotr.kliczewski(a)gmail.com> wrote:
> I attempted to run manual OST twice and both runs failed with the issue below.
> Can someone take a look?
>
> Thanks,
> Piotr
>
> 2017-10-20 07:59:12,485::log_utils.py::__exit__::607::ovirtlago.prefix::DEBUG::
>   File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 636, in wrapper
>     return func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/reposetup.py", line 111, in wrapper
>     with utils.repo_server_context(args[0]):
>   File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
>     return self.gen.next()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/utils.py", line 100, in repo_server_context
>     root_dir=prefix.paths.internal_repo(),
>   File "/usr/lib/python2.7/site-packages/ovirtlago/utils.py", line 76, in _create_http_server
>     generate_request_handler(root_dir),
>   File "/usr/lib64/python2.7/SocketServer.py", line 419, in __init__
>     self.server_bind()
>   File "/usr/lib64/python2.7/BaseHTTPServer.py", line 108, in server_bind
>     SocketServer.TCPServer.server_bind(self)
>   File "/usr/lib64/python2.7/SocketServer.py", line 430, in server_bind
>     self.socket.bind(self.server_address)
>   File "/usr/lib64/python2.7/socket.py", line 224, in meth
>     return getattr(self._sock,name)(*args)
>
> 2017-10-20 07:59:12,485::cmd.py::do_run::365::root::ERROR::Error occured, aborting
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 362, in do_run
>     self.cli_plugins[args.ovirtverb].do_run(args)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in do_run
>     self._do_run(**vars(args))
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 501, in wrapper
>     return func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 512, in wrapper
>     return func(*args, prefix=prefix, **kwargs)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 166, in do_deploy
>     prefix.deploy()
>   File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 636, in wrapper
>     return func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/reposetup.py", line 111, in wrapper
>     with utils.repo_server_context(args[0]):
>   File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
>     return self.gen.next()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/utils.py", line 100, in repo_server_context
>     root_dir=prefix.paths.internal_repo(),
>   File "/usr/lib/python2.7/site-packages/ovirtlago/utils.py", line 76, in _create_http_server
>     generate_request_handler(root_dir),
>   File "/usr/lib64/python2.7/SocketServer.py", line 419, in __init__
>     self.server_bind()
>   File "/usr/lib64/python2.7/BaseHTTPServer.py", line 108, in server_bind
>     SocketServer.TCPServer.server_bind(self)
>   File "/usr/lib64/python2.7/SocketServer.py", line 430, in server_bind
>     self.socket.bind(self.server_address)
>   File "/usr/lib64/python2.7/socket.py", line 224, in meth
>     return getattr(self._sock,name)(*args)
> error: [Errno 98] Address already in use
> _______________________________________________
> Infra mailing list
> Infra(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
--
Eyal edri
MANAGER
RHV DevOps
EMEA VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)