On Wed, Nov 15, 2017 at 1:35 PM, Dafna Ron <dron@redhat.com> wrote:
> Didi,
>
> Thank you for your detailed explanation and for taking the time to debug
> this issue.
>
> I opened the following Jira issues:
>
> 1. for increasing entropy in the hosts:
> https://ovirt-jira.atlassian.net/browse/OVIRT-1763
I no longer think this is the main reason for the slowness, although
it might still be useful to verify/improve.
yuvalt pointed out in a private discussion that the openssl lib does
nothing related to random numbers in its pre/post install scripts.
It does call ldconfig, as do several other packages, which can take
quite a lot of time - but other packages took less, and 3 minutes is
definitely not reasonable.
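
If we do want to verify it, a rough sketch (untested) like the one below
can be run on each VM; the procfs/sysfs paths are the standard Linux
ones, not anything oVirt-specific:

    #!/usr/bin/env python
    # check_entropy.py - rough sketch: report available entropy and whether
    # the guest sees a hardware RNG (e.g. virtio-rng).
    def read(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except IOError:
            return '(missing)'

    # Values that stay in the low hundreds usually mean starvation.
    print('entropy_avail: ' + read('/proc/sys/kernel/random/entropy_avail'))
    # '(missing)' or 'none' here means no hardware RNG is wired into the guest.
    print('rng_current:   ' + read('/sys/class/hw_random/rng_current'))
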
Also see this, from engine log:
2017-11-13 11:07:17,026-05 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
Installing Host lago-upgrade-from-release-suite-master-host0. Yum
update: 398/570: 1:NetworkManager-team-1.8.0-11.el7_4.x86_64.
2017-11-13 11:07:30,573-05 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
Installing Host lago-upgrade-from-release-suite-master-host0. Yum
obsoleting: 399/570: 1:NetworkManager-ppp-1.8.0-11.el7_4.x86_64.
2017-11-13 11:07:45,137-05 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
Installing Host lago-upgrade-from-release-suite-master-host0. Yum
update: 400/570: 1:NetworkManager-tui-1.8.0-11.el7_4.x86_64.
2017-11-13 11:07:57,842-05 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
Installing Host lago-upgrade-from-release-suite-master-host0. Yum
update: 401/570: audit-2.7.6-3.el7.x86_64.
That's ~15 seconds per package, and the first 3 have no scripts at all.
I'd say there is some serious storage issue there - a bad disk, a loaded
storage network/hardware, something like that.
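
For next time, a rough sketch (untested) along these lines could scan
engine.log for the Yum progress lines and print the slow steps, instead
of skimming for gaps by eye as I did below:

    #!/usr/bin/env python
    # yum_gaps.py - rough sketch: print Yum progress steps in engine.log that
    # took longer than a threshold. Assumes one record per line, starting with
    # a timestamp in the format seen in the excerpts above.
    import sys
    from datetime import datetime

    prev_time = None
    for line in open(sys.argv[1]):
        if 'VDS_INSTALL_IN_PROGRESS' not in line or 'Yum' not in line:
            continue
        # "2017-11-13 11:07:17,026-05 ..." - the first 23 chars are the timestamp
        when = datetime.strptime(line[:23], '%Y-%m-%d %H:%M:%S,%f')
        step = line.rsplit('Yum', 1)[1].strip()
        if prev_time is not None:
            gap = (when - prev_time).total_seconds()
            if gap > 30:  # arbitrary threshold, in seconds
                print('%6.0f seconds before: %s' % (gap, step))
        prev_time = when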
>
> 2. I added a comment to the "not all logs are downloaded" Jira regarding a
> workaround that would save the logs in a different location:
> https://ovirt-jira.atlassian.net/browse/OVIRT-1583
>
> 3. adding /tmp logs to the job logs:
> https://ovirt-jira.atlassian.net/browse/OVIRT-1764
>
> Again, thank you for your help Didi.
>
> Dafna
>
>
>
> On 11/15/2017 09:01 AM, Yedidyah Bar David wrote:
>> On Tue, Nov 14, 2017 at 5:48 PM, Dafna Ron <dron@redhat.com> wrote:
>>> Hi,
>>>
>>> We had a failure in the upgrade suite for 002_bootstrap.add_hosts. I am not
>>> seeing any error that suggests an issue in the engine.
>>>
>>> I can see in the host's messages log that we stopped writing to the log
>>> for 15 minutes, which may suggest that something is keeping
>>> the host from starting and causes us to fail the test on timeout.
>>>
>>> However, I could use some help in determining the cause of this failure and
>>> whether it is connected to the bootstrap_add_host test in upgrade.
>>>
>>> Link to suspected patches: As I said, I do not think it is related, but this
>>> is the patch that was reported.
>>>
>>> https://gerrit.ovirt.org/#/c/83854/
>>>
>>>
>>> Link to Job:
>>>
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3795/
>>>
>>>
>>> Link to all logs:
>>>
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3795/artifact/
>>>
>>>
>>> (Relevant) error snippet from the log:
>>>
>>> <error>
>>>
>>> Test error:
>>>
>>> Error Message
>>>
>>> False != True after 900 seconds
>>>
>>> Stacktrace
>>>
>>> Traceback (most recent call last):
>>> File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
>>> testMethod()
>>> File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
>>> self.test(*self.arg)
>>> File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py" , line 129, in
>>> wrapped_test
>>> test()
>>> File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py" , line 59, in
>>> wrapper
>>> return func(get_test_prefix(), *args, **kwargs)
>>> File
>>> "/home/jenkins/workspace/ovirt-master_change-queue- tester/ovirt-system-tests/ upgrade-from-release-suite- master/test-scenarios-after- upgrade/002_bootstrap.py",
>>> line 187, in add_hosts
>>> testlib.assert_true_within(_host_is_up_4, timeout=15*60)
>>> File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py" , line 263, in
>>> assert_true_within
>>> assert_equals_within(func, True, timeout, allowed_exceptions)
>>> File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py" , line 237, in
>>> assert_equals_within
>>> '%s != %s after %s seconds' % (res, value, timeout)
>>> AssertionError: False != True after 900 seconds
>>>
>> Hi,
>>
>> It took me way too long to find the log file that includes the above
>> stack trace. There is a known bug that pressing '(all files in zip)' does
>> not get all of them, only some. If we can't fix it, please add something
>> that will create such a zip/tar/whatever as part of the job, so that we
>> do not rely on jenkins.
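>>
>> Something minimal (untested sketch; the directory names here are just an
>> example, not necessarily what the job uses) run at the end of the job
>> would do:
>>
>>     #!/usr/bin/env python
>>     # collect_logs.py - untested sketch: tar up the collected test logs so
>>     # one artifact holds everything, instead of relying on jenkins' zip.
>>     import os
>>     import tarfile
>>
>>     artifacts_dir = os.environ.get('ARTIFACTS_DIR', 'exported-artifacts')
>>     logs_dir = os.path.join(artifacts_dir, 'test_logs')
>>     with tarfile.open(os.path.join(artifacts_dir, 'all-logs.tar.gz'), 'w:gz') as tar:
>>         # add the whole directory recursively, keeping paths relative
>>         tar.add(logs_dir, arcname='test_logs')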
>>
>> Something causes things to be very slow; I have no idea what in
>> particular, and could not find a real problem.
>>
>> Since we timed out and killed the engine in the middle of host-deploy,
>> we do not have full logs of it. These are copied to the engine from the
>> host only after it finishes. Before that, the log is written on the host
>> in /tmp. We might want to make ost collect that as well, to help debug
>> similar cases.
>>
>> What we can see in engine log is that we start installing the host [1]:
>>
>> 2017-11-13 11:00:12,331-05 INFO
>> [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
>> (EE-ManagedThreadFactory-engine-Thread-1) [45082420] Before
>> Installation host 83090668-908c-4f49-8690-5348ec12f931,
>> lago-upgrade-from-release-suite-master-host0
>> 2017-11-13 11:00:12,354-05 INFO
>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-1) [45082420] START,
>> SetVdsStatusVDSCommand(HostName =
>> lago-upgrade-from-release-suite-master-host0,
>> SetVdsStatusVDSCommandParameters:{hostId='83090668-908c-4f49-8690-5348ec12f931',
>> status='Installing', nonOperationalReason='NONE',
>> stopSpmFailureLogged='false', maintenanceReason='null'}), log id:
>> 22a831f7
>>
>> First line from host-deploy is:
>>
>> 2017-11-13 11:00:13,335-05 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
>> Installing Host lago-upgrade-from-release-suite-master-host0. Stage:
>> Initializing.
>>
>> which is one second later, ok.
>>
>> ...
>>
>> 2017-11-13 11:00:13,459-05 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
>> Installing Host lago-upgrade-from-release-suite-master-host0. Stage:
>> Environment packages setup.
>>
>> It then immediately installs the packages it needs for itself;
>> in this case it updates the package 'dmidecode' from 2.12-9
>> to 3.0-5. I guess our image is a bit old.
>>
>> ...
>>
>> 2017-11-13 11:00:19,516-05 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
>> Installing Host lago-upgrade-from-release-suite-master-host0. Yum
>> Verify: 2/2: dmidecode.x86_64 1:2.12-9.el7 - ud.
>>
>> So the equivalent of 'yum update dmidecode' took 6 seconds.
>> Reasonable.
>>
>> ...
>>
>> Later on, it starts installing/updating the actual packages it
>> should install/update. Starts with:
>>
>> 2017-11-13 11:00:24,982-05 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
>> Installing Host lago-upgrade-from-release-suite-master-host0. Yum
>> Status: Downloading Packages.
>>
>> Then there are many lines about each package download. First is:
>>
>> 2017-11-13 11:00:25,049-05 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
>> Installing Host lago-upgrade-from-release-suite-master-host0. Yum
>> Download/Verify: GeoIP-1.5.0-11.el7.x86_64.
>>
>> Last is:
>>
>> 2017-11-13 11:00:43,790-05 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
>> Installing Host lago-upgrade-from-release-suite-master-host0. Yum
>> Download/Verify: zlib-1.2.7-17.el7.x86_64.
>>
>> Later we see it updates 570 packages. So downloading 570 packages
>> took ~ 20 seconds, good too. Then it installs them. First is:
>>
>> 2017-11-13 11:00:46,114-05 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
>> Installing Host lago-upgrade-from-release-suite-master-host0. Yum
>> update: 1/570: libgcc-4.8.5-16.el7.x86_64.
>>
>> And then many similar ones. I only skimmed through them, trying to
>> find large gaps - I didn't write a script - and a large one I found is:
>>
>> 2017-11-13 11:10:04,522-05 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
>> Installing Host lago-upgrade-from-release-suite-master-host0. Yum
>> updated: 506/570: audit.
>>
>> 2017-11-13 11:13:06,483-05 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
>> Installing Host lago-upgrade-from-release-suite-master-host0. Yum
>> updated: 507/570: openssl-libs.
>>
>> In between these two lines, there are a few unrelated ones, from some
>> other engine thread. So something caused it to need 3 minutes to install
>> openssl-libs.
>>
>> A wild guess: Perhaps we have low entropy, and openssl does something
>> that needs entropy?
>>
>> Checking engine-setup log there [2], I see that the following command
>> took almost 1.5 minutes to run:
>>
>> 2017-11-13 10:58:19,459-0500 DEBUG
>> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.sso
>> plugin.executeRaw:813 execute:
>> ('/usr/share/ovirt-engine/bin/ovirt-engine-crypto-tool.sh',
>> 'pbe-encode', '--password=env:pass'), executable='None', cwd='None',
>> env={'pass': '**FILTERED**', 'LESSOPEN': '||/usr/bin/lesspipe.sh %s',
>> 'SSH_CLIENT': '192.168.200.1 52372 22', 'SELINUX_USE_CURRENT_RANGE':
>> '', 'LOGNAME': 'root', 'USER': 'root', 'OVIRT_ENGINE_JAVA_HOME':
>> u'/usr/lib/jvm/jre', 'PATH':
>> '/opt/rh/rh-postgresql95/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin',
>> 'HOME': '/root', 'OVIRT_JBOSS_HOME':
>> '/usr/share/ovirt-engine-wildfly', 'LD_LIBRARY_PATH':
>> '/opt/rh/rh-postgresql95/root/usr/lib64', 'LANG': 'en_US.UTF-8',
>> 'SHELL': '/bin/bash', 'LIBRARY_PATH':
>> '/opt/rh/rh-postgresql95/root/usr/lib64', 'SHLVL': '4',
>> 'POSTGRESQLENV':
>> 'COMMAND/pg_dump=str:/opt/rh/rh-postgresql95/root/usr/bin/pg_dump
>> COMMAND/psql=str:/opt/rh/rh-postgresql95/root/usr/bin/psql
>> COMMAND/pg_restore=str:/opt/rh/rh-postgresql95/root/usr/bin/pg_restore
>> COMMAND/postgresql-setup=str:/opt/rh/rh-postgresql95/root/usr/bin/postgresql-setup
>> OVESETUP_PROVISIONING/postgresService=str:rh-postgresql95-postgresql
>> OVESETUP_PROVISIONING/postgresConf=str:/var/opt/rh/rh-postgresql95/lib/pgsql/data/postgresql.conf
>> OVESETUP_PROVISIONING/postgresPgHba=str:/var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf
>> OVESETUP_PROVISIONING/postgresPgVersion=str:/var/opt/rh/rh-postgresql95/lib/pgsql/data/PG_VERSION',
>> 'MANPATH': '/opt/rh/rh-postgresql95/root/usr/share/man:', 'X_SCLS':
>> 'rh-postgresql95 ', 'XDG_RUNTIME_DIR': '/run/user/0',
>> 'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PYTHONPATH':
>> '/usr/share/ovirt-engine/setup/bin/..::', 'SELINUX_ROLE_REQUESTED':
>> '', 'MAIL': '/var/mail/root', 'PKG_CONFIG_PATH':
>> '/opt/rh/rh-postgresql95/root/usr/lib64/pkgconfig', 'XDG_SESSION_ID':
>> '14', 'sclenv': 'rh-postgresql95', 'XDG_CONFIG_DIRS':
>> '/etc/opt/rh/rh-postgresql95/xdg:/etc/xdg', 'JAVACONFDIRS':
>> '/etc/opt/rh/rh-postgresql95/java:/etc/java',
>> 'SELINUX_LEVEL_REQUESTED': '', 'XDG_DATA_DIRS':
>> '/opt/rh/rh-postgresql95/root/usr/share', 'PWD': '/root', 'CPATH':
>> '/opt/rh/rh-postgresql95/root/usr/include', 'OTOPI_LOGFILE':
>> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20171113105548-qrc7zo.log',
>> 'SSH_CONNECTION': '192.168.200.1 52372 192.168.200.3 22',
>> 'OTOPI_EXECDIR': '/root'}
>> 2017-11-13 10:59:43,545-0500 DEBUG
>> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.sso
>> plugin.executeRaw:863 execute-result:
>> ('/usr/share/ovirt-engine/bin/ovirt-engine-crypto-tool.sh',
>> 'pbe-encode', '--password=env:pass'), rc=0
>> 2017-11-13 10:59:43,546-0500 DEBUG
>> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.sso
>> plugin.execute:921 execute-output:
>> ('/usr/share/ovirt-engine/bin/ovirt-engine-crypto-tool.sh',
>> 'pbe-encode', '--password=env:pass') stdout:
>> eyJhcnRpZmFjdCI6IkVudmVsb3BlUEJFIiwic2FsdCI6ImNjSmhFbHRnUEJxeUlNTUJSaU1OdFRYL3M0RGRJT1hOSWJjV2F1NFZGT0U9Iiwic2VjcmV0IjoiK2ZlTzVyZm9kNGlsVmZLRENaRjdseVVQZHZnWnBTWUF0cnBYUWVpQnJaTT0iLCJ2ZXJzaW9uIjoiMSIsIml0ZXJhdGlvbnMiOiI0MDAwIiwiYWxnb3JpdGhtIjoiUEJLREYyV2l0aEhtYWNTSEExIn0=
>>
>> 2017-11-13 10:59:43,546-0500 DEBUG
>> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.sso
>> plugin.execute:926 execute-output:
>> ('/usr/share/ovirt-engine/bin/ovirt-engine-crypto-tool.sh',
>> 'pbe-encode', '--password=env:pass') stderr:
>>
>> This is almost always due to not enough entropy.
>>
>> So please check that everything - from the physical machines to
>> lago/ost/libvirt/etc. - makes sure to supply all VMs with enough
>> entropy. We usually do this using virtio-rng.
>>
>> [1] http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3795/artifact/exported-artifacts/upgrade-from-release-suit-master-el7/test_logs/upgrade-from-release-suite-master/post-002_bootstrap.py/lago-upgrade-from-release-suite-master-engine/_var_log/ovirt-engine/engine.log
>> [2] http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3795/artifact/exported-artifacts/upgrade-from-release-suit-master-el7/test_logs/upgrade-from-release-suite-master/post-002_bootstrap.py/lago-upgrade-from-release-suite-master-engine/_var_log/ovirt-engine/setup/ovirt-engine-setup-20171113105548-qrc7zo.log
>>
>>> lago log:
>>>
>>> 2017-11-13
>>> 15:31:23,212::log_utils.py::__enter__::600::lago.prefix::INFO::
>>> 2017-11-13
>>> 15:31:23,213::log_utils.py::__enter__::600::lago.ssh::DEBUG: :start
>>> task:0468ed2f-b174-4d94-bc66-2b6e08087a86:Get ssh client for
>>> lago-upgrade-from-release-suite-master-host0:
>>> 2017-11-13
>>> 15:31:23,213::log_utils.py::__enter__::600::lago.ssh::DEBUG: :start
>>> task:91372413-f7fd-4b72-85b9-9f5216ca7ae9:Get ssh client for
>>> lago-upgrade-from-release-suite-master-engine:
>>> 2017-11-13 15:31:23,213::ssh.py::get_ssh_client::339::lago.ssh::DEBUG:: Still
>>> got 1 tries for lago-upgrade-from-release-suite-master-host0
>>> 2017-11-13 15:31:23,213::ssh.py::get_ssh_client::339::lago.ssh::DEBUG:: Still
>>> got 1 tries for lago-upgrade-from-release-suite-master-engine
>>> 2017-11-13
>>> 15:31:26,220::ssh.py::get_ssh_client::354::lago.ssh::DEBUG:: Socket error
>>> connecting to lago-upgrade-from-release-suite-master-engine: [Errno None]
>>> Unable to connect to port 22 on 192.168.200.3
>>> 2017-11-13
>>> 15:31:26,221::ssh.py::get_ssh_client::354::lago.ssh::DEBUG:: Socket error
>>> connecting to lago-upgrade-from-release-suite-master-host0: [Errno None]
>>> Unable to connect to port 22 on 192.168.200.2
>>> 2017-11-13 15:31:27,222::log_utils.py::__exit__::611::lago.ssh::DEBUG:: end
>>> task:91372413-f7fd-4b72-85b9-9f5216ca7ae9:Get ssh client for
>>> lago-upgrade-from-release-suite-master-engine:
>>> 2017-11-13 15:31:27,222::log_utils.py::__exit__::611::lago.ssh::DEBUG:: end
>>> task:0468ed2f-b174-4d94-bc66-2b6e08087a86:Get ssh client for
>>> lago-upgrade-from-release-suite-master-host0:
>>> 2017-11-13 15:31:27,222::ssh.py::wait_for_ssh::129::lago.ssh::DEBUG: :Got
>>> exception while sshing to lago-upgrade-from-release-suite-master-engine:
>>> Timed out (in 4 s) trying to ssh to
>>> lago-upgrade-from-release-suite-master-engine
>>> 2017-11-13 15:31:27,222::ssh.py::wait_for_ssh::129::lago.ssh::DEBUG: :Got
>>> exception while sshing to lago-upgrade-from-release-suite-master-host0:
>>> Timed out (in 4 s) trying to ssh to
>>> lago-upgrade-from-release-suite-master-host0
>>> 2017-11-13
>>> 15:31:28,224::log_utils.py::__enter__::600::lago.ssh::DEBUG: :start
>>> task:fc88f641-b012-4636-a471-9ccaaf361a53:Get ssh client for
>>> lago-upgrade-from-release-suite-master-engine:
>>> 2017-11-13
>>> 15:31:28,224::log_utils.py::__enter__::600::lago.ssh::DEBUG: :start
>>> task:afb01a46-6338-407f-b7c9-9d5c6b91404d:Get ssh client for
>>> lago-upgrade-from-release-suite-master-host0:
>>> 2017-11-13 15:31:28,224::ssh.py::get_ssh_client::339::lago.ssh::DEBUG:: Still
>>> got 1 tries for lago-upgrade-from-release-suite-master-engine
>>> 2017-11-13 15:31:28,225::ssh.py::get_ssh_client::339::lago.ssh::DEBUG:: Still
>>> got 1 tries for lago-upgrade-from-release-suite-master-host0
>>> 2017-11-13
>>> 15:31:28,226::ssh.py::get_ssh_client::354::lago.ssh::DEBUG:: Socket error
>>> connecting to lago-upgrade-from-release-suite-master-host0: [Errno None]
>>> Unable to connect to port 22 on 192.168.200.2
>>> 2017-11-13 15:31:29,228::log_utils.py::__exit__::611::lago.ssh::DEBUG:: end
>>> task:afb01a46-6338-407f-b7c9-9d5c6b91404d:Get ssh client for
>>> lago-upgrade-from-release-suite-master-host0:
>>> 2017-11-13 15:31:29,228::ssh.py::wait_for_ssh::129::lago.ssh::DEBUG: :Got
>>> exception while sshing to lago-upgrade-from-release-suite-master-host0:
>>> Timed out (in 1 s) trying to ssh to
>>> lago-upgrade-from-release-suite-master-host0
>>> 2017-11-13
>>> 15:31:29,229::ssh.py::get_ssh_client::354::lago.ssh::DEBUG:: Socket error
>>> connecting to lago-upgrade-from-release-suite-master-engine: [Errno None]
>>> Unable to connect to port 22 on 192.168.200.3
>>> 2017-11-13 15:31:30,229::log_utils.py::__exit__::611::lago.ssh::DEBUG:: end
>>> task:fc88f641-b012-4636-a471-9ccaaf361a53:Get ssh client for
>>> lago-upgrade-from-release-suite-master-engine:
>>> 2017-11-13
>>> 15:31:30,230::log_utils.py::__enter__::600::lago.ssh::DEBUG: :start
>>> task:9df5ddca-bbc4-485a-9f3a-b0b9a5bb5990:Get ssh client for
>>> lago-upgrade-from-release-suite-master-host0:
>>> 2017-11-13 15:31:30,230::ssh.py::wait_for_ssh::129::lago.ssh::DEBUG: :Got
>>> exception while sshing to lago-upgrade-from-release-suite-master-engine:
>>> Timed out (in 2 s) trying to ssh to
>>> lago-upgrade-from-release-suite-master-engine
>>> 2017-11-13 15:31:30,230::ssh.py::get_ssh_client::339::lago.ssh::DEBUG:: Still
>>> got 1 tries for lago-upgrade-from-release-suite-master-host0
>>> 2017-11-13
>>> 15:31:30,231::ssh.py::get_ssh_client::354::lago.ssh::DEBUG:: Socket error
>>> connecting to lago-upgrade-from-release-suite-master-host0: [Errno None]
>>> Unable to connect to port 22 on 192.168.200.2
>>> 2017-11-13
>>> 15:31:31,231::log_utils.py::__enter__::600::lago.ssh::DEBUG: :start
>>> task:53fe67da-a632-49fe-b697-e8a2c4cb7d23:Get ssh client for
>>> lago-upgrade-from-release-suite-master-engine:
>>> 2017-11-13 15:31:31,232::ssh.py::get_ssh_client::339::lago.ssh::DEBUG:: Still
>>> got 1 tries for lago-upgrade-from-release-suite-master-engine
>>> 2017-11-13
>>> 15:31:31,232::ssh.py::get_ssh_client::354::lago.ssh::DEBUG:: Socket error
>>> connecting to lago-upgrade-from-release-suite-master-engine: [Errno None]
>>> Unable to connect to port 22 on 192.168.200.3
>>> 2017-11-13 15:31:31,232::log_utils.py::__exit__::611::lago.ssh::DEBUG:: end
>>> task:9df5ddca-bbc4-485a-9f3a-b0b9a5bb5990:Get ssh client for
>>> lago-upgrade-from-release-suite-master-host0:
>>> 2017-11-13 15:31:31,233::ssh.py::wait_for_ssh::129::lago.ssh::DEBUG: :Got
>>> exception while sshing to lago-upgrade-from-release-suite-master-host0:
>>> Timed out (in 1 s) trying to ssh to
>>> lago-upgrade-from-release-suite-master-host0
>>> 2017-11-13 15:31:32,234::log_utils.py::__exit__::611::lago.ssh::DEBUG:: end
>>> task:53fe67da-a632-49fe-b697-e8a2c4cb7d23:Get ssh client for
>>> lago-upgrade-from-release-suite-master-engine:
>>> 2017-11-13 15:31:32,234::ssh.py::wait_for_ssh::129::lago.ssh::DEBUG: :Got
>>> exception while sshing to lago-upgrade-from-release-suite-master-engine:
>>> Timed out (in 1 s) trying to ssh to
>>> lago-upgrade-from-release-suite-master-engine
>>> 2017-11-13
>>> 15:31:32,234::log_utils.py::__enter__::600::lago.ssh::DEBUG: :start
>>> task:1969bf5b-4e66-43d7-91c6-a28949e98fe8:Get ssh client for
>>> lago-upgrade-from-release-suite-master-host0:
>>> 2017-11-13 15:31:32,234::ssh.py::get_ssh_client::339::lago.ssh::DEBUG:: Still
>>> got 1 tries for lago-upgrade-from-release-suite-master-host0
>>> 2017-11-13
>>> 15:31:32,235::ssh.py::get_ssh_client::354::lago.ssh::DEBUG:: Socket error
>>> connecting to lago-upgrade-from-release-suite-master-host0: [Errno None]
>>> Unable to connect to port 22 on 192.168.200.2
>>> 2017-11-13
>>> 15:31:33,235::log_utils.py::__enter__::600::lago.ssh::DEBUG: :start
>>> task:5fd0e3c3-d83c-46f9-9c88-bc739aa9c430:Get ssh client for
>>> lago-upgrade-from-release-suite-master-engine:
>>> 2017-11-13 15:31:33,235::ssh.py::get_ssh_client::339::lago.ssh::DEBUG:: Still
>>> got 1 tries for lago-upgrade-from-release-suite-master-engine
>>> 2017-11-13 15:31:33,236::log_utils.py::__exit__::611::lago.ssh::DEBUG:: end
>>> task:1969bf5b-4e66-43d7-91c6-a28949e98fe8:Get ssh client for
>>> lago-upgrade-from-release-suite-master-host0:
>>> 2017-11-13
>>> 15:31:33,236::ssh.py::get_ssh_client::354::lago.ssh::DEBUG:: Socket error
>>> connecting to lago-upgrade-from-release-suite-master-engine: [Errno None]
>>> Unable to connect to port 22 on 192.168.200.3
>>> 2017-11-13 15:31:33,237::ssh.py::wait_for_ssh::129::lago.ssh::DEBUG: :Got
>>> exception while sshing to lago-upgrade-from-release-suite-master-host0:
>>> Timed out (in 1 s) trying to ssh to
>>> lago-upgrade-from-release-suite-master-host0
>>> 2017-11-13 15:31:34,238::log_utils.py::__exit__::611::lago.ssh::DEBUG:: end
>>> task:5fd0e3c3-d83c-46f9-9c88-bc739aa9c430:Get ssh client for
>>> lago-upgrade-from-release-suite-master-engine:
>>> 2017-11-13 15:31:34,238::ssh.py::wait_for_ssh::129::lago.ssh::DEBUG: :Got