[oVirt Jenkins] ovirt-system-tests_he-basic-ansible-suite-master - Build # 604 - Failure!

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 604
Build Status: Failure
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #604
[Gal Ben Haim] Adding dr suite
[Barak Korren] Remove the populate_mock function from mock_runner
[Daniel Belenky] mock_runner: store shell cmd in a variable
[Barak Korren] mock_runner: Added timeout param
[Barak Korren] Make whitelist repo configurable via env vars
[Daniel Belenky] stdci_runner: let mock_runner manage timeout
[Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config
-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: 012_local_maintenance_sdk.local_maintenance

Error Message:
False != True after 600 seconds
-------------------- >> begin captured logging << --------------------
root: INFO: * Waiting For System Stability...
root: INFO: * Performing Deactivation...
--------------------- >> end captured logging << ---------------------

Stack Trace:
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper
    prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/012_local_maintenance_sdk.py", line 65, in local_maintenance
    lambda: host_service.get().status == types.HostStatus.MAINTENANCE or
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long
    assert_equals_within_long(func, True, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long
    func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 252, in assert_equals_within
    '%s != %s after %s seconds' % (res, value, timeout)
'False != True after 600 seconds\n-------------------- >> begin captured logging << --------------------\nroot: INFO: * Waiting For System Stability...\nroot: INFO: * Performing Deactivation...\n--------------------- >> end captured logging << ---------------------'
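The "False != True after 600 seconds" failures above come from ovirtlago's polling assertion: a predicate is re-evaluated until it returns True or a timeout expires, and the final comparison result is formatted into the error string. A minimal sketch of that pattern, with hypothetical simplified signatures (the real helpers live in ovirtlago/testlib.py and differ in detail):

```python
import time

LONG_TIMEOUT = 600  # seconds, matching the timeout seen in these failures


def assert_equals_within(func, value, timeout, allowed_exceptions=(), interval=3):
    """Poll func() until it returns value, or fail after timeout seconds.

    Simplified sketch of an ovirtlago-style helper, not the real code.
    """
    deadline = time.time() + timeout
    res = None
    while time.time() < deadline:
        try:
            res = func()
        except allowed_exceptions:
            res = None
        if res == value:
            return
        time.sleep(interval)
    # This produces exactly the error shape seen above, e.g.
    # "False != True after 600 seconds".
    raise AssertionError('%s != %s after %s seconds' % (res, value, timeout))


def assert_true_within_long(func, allowed_exceptions=()):
    assert_equals_within(func, True, LONG_TIMEOUT, allowed_exceptions)
```

In the failing scenario the predicate is a lambda checking `host_service.get().status == types.HostStatus.MAINTENANCE`, so a host that never reaches MAINTENANCE within 600 seconds yields this exact message.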

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 605
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #604
[Gal Ben Haim] Adding dr suite
[Barak Korren] Remove the populate_mock function from mock_runner
[Daniel Belenky] mock_runner: store shell cmd in a variable
[Barak Korren] mock_runner: Added timeout param
[Barak Korren] Make whitelist repo configurable via env vars
[Daniel Belenky] stdci_runner: let mock_runner manage timeout
[Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config

Changes for Build #605
[Gal Ben Haim] Adding dr suite
-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: 012_local_maintenance_sdk.local_maintenance

Error Message:
False != True after 600 seconds
-------------------- >> begin captured logging << --------------------
root: INFO: * Waiting For System Stability...
root: INFO: * Performing Deactivation...
--------------------- >> end captured logging << ---------------------

Stack Trace:
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper
    prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/012_local_maintenance_sdk.py", line 65, in local_maintenance
    lambda: host_service.get().status == types.HostStatus.MAINTENANCE or
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long
    assert_equals_within_long(func, True, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long
    func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 252, in assert_equals_within
    '%s != %s after %s seconds' % (res, value, timeout)
'False != True after 600 seconds\n-------------------- >> begin captured logging << --------------------\nroot: INFO: * Waiting For System Stability...\nroot: INFO: * Performing Deactivation...\n--------------------- >> end captured logging << ---------------------'

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 606
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #604
[Gal Ben Haim] Adding dr suite
[Barak Korren] Remove the populate_mock function from mock_runner
[Daniel Belenky] mock_runner: store shell cmd in a variable
[Barak Korren] mock_runner: Added timeout param
[Barak Korren] Make whitelist repo configurable via env vars
[Daniel Belenky] stdci_runner: let mock_runner manage timeout
[Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config

Changes for Build #605
[Gal Ben Haim] Adding dr suite

Changes for Build #606
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Barak Korren] Enable CI for kubevirt/client-python
[Daniel Belenky] Add timeout config to stdci dsl
[Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg
-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: 012_local_maintenance_sdk.local_maintenance

Error Message:
False != True after 600 seconds
-------------------- >> begin captured logging << --------------------
root: INFO: * Waiting For System Stability...
root: INFO: * Performing Deactivation...
--------------------- >> end captured logging << ---------------------

Stack Trace:
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper
    prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/012_local_maintenance_sdk.py", line 65, in local_maintenance
    lambda: host_service.get().status == types.HostStatus.MAINTENANCE or
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long
    assert_equals_within_long(func, True, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long
    func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 252, in assert_equals_within
    '%s != %s after %s seconds' % (res, value, timeout)
'False != True after 600 seconds\n-------------------- >> begin captured logging << --------------------\nroot: INFO: * Waiting For System Stability...\nroot: INFO: * Performing Deactivation...\n--------------------- >> end captured logging << ---------------------'

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 607
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #604
[Gal Ben Haim] Adding dr suite
[Barak Korren] Remove the populate_mock function from mock_runner
[Daniel Belenky] mock_runner: store shell cmd in a variable
[Barak Korren] mock_runner: Added timeout param
[Barak Korren] Make whitelist repo configurable via env vars
[Daniel Belenky] stdci_runner: let mock_runner manage timeout
[Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config

Changes for Build #605
[Gal Ben Haim] Adding dr suite

Changes for Build #606
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Barak Korren] Enable CI for kubevirt/client-python
[Daniel Belenky] Add timeout config to stdci dsl
[Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg

Changes for Build #607
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: 012_local_maintenance_sdk.local_maintenance

Error Message:
False != True after 600 seconds
-------------------- >> begin captured logging << --------------------
root: INFO: * Waiting For System Stability...
root: INFO: * Performing Deactivation...
--------------------- >> end captured logging << ---------------------

Stack Trace:
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper
    prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/012_local_maintenance_sdk.py", line 65, in local_maintenance
    lambda: host_service.get().status == types.HostStatus.MAINTENANCE or
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long
    assert_equals_within_long(func, True, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long
    func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 252, in assert_equals_within
    '%s != %s after %s seconds' % (res, value, timeout)
'False != True after 600 seconds\n-------------------- >> begin captured logging << --------------------\nroot: INFO: * Waiting For System Stability...\nroot: INFO: * Performing Deactivation...\n--------------------- >> end captured logging << ---------------------'

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 608
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #604
[Gal Ben Haim] Adding dr suite
[Barak Korren] Remove the populate_mock function from mock_runner
[Daniel Belenky] mock_runner: store shell cmd in a variable
[Barak Korren] mock_runner: Added timeout param
[Barak Korren] Make whitelist repo configurable via env vars
[Daniel Belenky] stdci_runner: let mock_runner manage timeout
[Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config

Changes for Build #605
[Gal Ben Haim] Adding dr suite

Changes for Build #606
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Barak Korren] Enable CI for kubevirt/client-python
[Daniel Belenky] Add timeout config to stdci dsl
[Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg

Changes for Build #607
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #608
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: 008_restart_he_vm.restart_he_vm

Error Message:
could not parse JSON: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2772 (Fri Sep 7 02:55:40 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2772 (Fri Sep 7 02:55:40 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "b642f5be", "local_conf_timestamp": 2772, "host-ts": 2772}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2771 (Fri Sep 7 02:55:39 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2771 (Fri Sep 7 02:55:39 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-
-------------------- >> begin captured logging << --------------------
lago.ssh: DEBUG: start task:0148d290-97ba-42c9-b78e-616c813b22bf:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:0148d290-97ba-42c9-b78e-616c813b22bf:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running a363a2fe on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json
lago.ssh: DEBUG: Command a363a2fe on lago-he-basic-ansible-suite-master-host-0 returned with 0
lago.ssh: DEBUG: Command a363a2fe on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2599 (Fri Sep 7 02:52:47 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=2599 (Fri Sep 7 02:52:47 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUp\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Up"}, "score": 3400, "stopped": false, "maintenance": false, "crc32": "3e907b1b", "local_conf_timestamp": 2599, "host-ts": 2599}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2600 (Fri Sep 7 02:52:48 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2600 (Fri Sep 7 02:52:48 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "e8ac0ce1", "local_conf_timestamp": 2600, "host-ts": 2600}, "global_maintenance": true}
root: INFO: * Shutting down HE VM on host: lago-he-basic-ansible-suite-master-host-0
lago.ssh: DEBUG: start task:b74bbe86-d612-41c6-80f2-71f425bcbe1c:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:b74bbe86-d612-41c6-80f2-71f425bcbe1c:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running a3e4bdb2 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-shutdown
lago.ssh: DEBUG: Command a3e4bdb2 on lago-he-basic-ansible-suite-master-host-0 returned with 0
root: INFO: * Command succeeded
root: INFO: * Waiting for VM to be down...
lago.ssh: DEBUG: start task:f780be22-86eb-4b14-91c6-bdc6e00dc13b:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:f780be22-86eb-4b14-91c6-bdc6e00dc13b:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running a507720c on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json
lago.ssh: DEBUG: Command a507720c on lago-he-basic-ansible-suite-master-host-0 returned with 0
lago.ssh: DEBUG: Command a507720c on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2599 (Fri Sep 7 02:52:47 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=2599 (Fri Sep 7 02:52:47 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUp\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Up"}, "score": 3400, "stopped": false, "maintenance": false, "crc32": "3e907b1b", "local_conf_timestamp": 2599, "host-ts": 2599}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2600 (Fri Sep 7 02:52:48 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2600 (Fri Sep 7 02:52:48 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "e8ac0ce1", "local_conf_timestamp": 2600, "host-ts": 2600}, "global_maintenance": true}
lago.ssh: DEBUG: start task:7850e30c-e26a-4894-8914-c722f7c8ae33:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:7850e30c-e26a-4894-8914-c722f7c8ae33:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running ab9ec64c on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json
lago.ssh: DEBUG: Command ab9ec64c on lago-he-basic-ansible-suite-master-host-0 returned with 0
lago.ssh: DEBUG: Command ab9ec64c on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2609 (Fri Sep 7 02:52:57 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2609 (Fri Sep 7 02:52:57 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "efc01efd", "local_conf_timestamp": 2609, "host-ts": 2609}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2600 (Fri Sep 7 02:52:48 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2600 (Fri Sep 7 02:52:48 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "e8ac0ce1", "local_conf_timestamp": 2600, "host-ts": 2600}, "global_maintenance": true}
lago.ssh: DEBUG: start task:73a0814e-0cf9-4396-8c93-7909aefe4c97:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:73a0814e-0cf9-4396-8c93-7909aefe4c97:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running adff7936 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json
lago.ssh: DEBUG: Command adff7936 on lago-he-basic-ansible-suite-master-host-0 returned with 0
lago.ssh: DEBUG: Command adff7936 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2619 (Fri Sep 7 02:53:07 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2619 (Fri Sep 7 02:53:07 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "11b18f20", "local_conf_timestamp": 2619, "host-ts": 2619}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2610 (Fri Sep 7 02:52:59 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2611 (Fri Sep 7 02:52:59 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "dcabbd6b", "local_conf_timestamp": 2611, "host-ts": 2610}, "global_maintenance": true}
root: INFO: * VM is down.
root: INFO: * Stopping services...
lago.ssh: DEBUG: start task:7151443a-3382-43da-b1f1-c0401583d5d6:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:7151443a-3382-43da-b1f1-c0401583d5d6:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running ae846dee on lago-he-basic-ansible-suite-master-host-0: systemctl stop vdsmd ovirt-ha-broker ovirt-ha-agent
lago.ssh: DEBUG: Command ae846dee on lago-he-basic-ansible-suite-master-host-0 returned with 0
root: INFO: * Starting services...
lago.ssh: DEBUG: start task:bd93de3a-8477-4561-9fea-c9b6de1c1b3a:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:bd93de3a-8477-4561-9fea-c9b6de1c1b3a:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running b8285586 on lago-he-basic-ansible-suite-master-host-0: systemctl start vdsmd ovirt-ha-broker ovirt-ha-agent
lago.ssh: DEBUG: Command b8285586 on lago-he-basic-ansible-suite-master-host-0 returned with 0
root: INFO: * Waiting for agent to be ready...
lago.ssh: DEBUG: start task:dc27fa1c-dd11-401b-bbac-7d7bcc145240:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:dc27fa1c-dd11-401b-bbac-7d7bcc145240:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running ba988070 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status
lago.ssh: DEBUG: Command ba988070 on lago-he-basic-ansible-suite-master-host-0 returned with 1
lago.ssh: DEBUG: Command ba988070 on lago-he-basic-ansible-suite-master-host-0 output: The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
lago.ssh: DEBUG: start task:e5c06cb0-2d27-4b4e-8394-0fad85438b17:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:e5c06cb0-2d27-4b4e-8394-0fad85438b17:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running c0f0e7aa on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status
lago.ssh: DEBUG: Command c0f0e7aa on lago-he-basic-ansible-suite-master-host-0 returned with 1
lago.ssh: DEBUG: Command c0f0e7aa on lago-he-basic-ansible-suite-master-host-0 output: The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
lago.ssh: DEBUG: start task:c62d9da2-1e89-4227-a9d4-c6d7ad42d0b2:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:c62d9da2-1e89-4227-a9d4-c6d7ad42d0b2:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running c324d108 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status
lago.ssh: DEBUG: Command c324d108 on lago-he-basic-ansible-suite-master-host-0 returned with 1
lago.ssh: DEBUG: Command c324d108 on lago-he-basic-ansible-suite-master-host-0 output: The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
lago.ssh: DEBUG: start task:6a3f3ee3-678e-48d4-85a3-437b983736d6:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:6a3f3ee3-678e-48d4-85a3-437b983736d6:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running c554ad90 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status
lago.ssh: DEBUG: Command c554ad90 on lago-he-basic-ansible-suite-master-host-0 returned with 1
lago.ssh: DEBUG: Command c554ad90 on lago-he-basic-ansible-suite-master-host-0 output: The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
lago.ssh: DEBUG: start task:be7fa34d-c089-4e0a-a5df-1065896c0ad3:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:be7fa34d-c089-4e0a-a5df-1065896c0ad3:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running c7834d06 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status
lago.ssh: DEBUG: Command c7834d06 on lago-he-basic-ansible-suite-master-host-0 returned with 1
lago.ssh: DEBUG: Command c7834d06 on lago-he-basic-ansible-suite-master-host-0 output: The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
lago.ssh: DEBUG: start task:4e788f25-696f-443a-9713-3749b602fa42:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:4e788f25-696f-443a-9713-3749b602fa42:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running c9acb7e8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status
lago.ssh: DEBUG: Command c9acb7e8 on lago-he-basic-ansible-suite-master-host-0 returned with 1
lago.ssh: DEBUG: Command c9acb7e8 on lago-he-basic-ansible-suite-master-host-0 output: The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
lago.ssh: DEBUG: start task:c7aababd-80ff-479d-8eda-4a5fd9afa601:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:c7aababd-80ff-479d-8eda-4a5fd9afa601:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running cbd47e02 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status
lago.ssh: DEBUG: Command cbd47e02 on lago-he-basic-ansible-suite-master-host-0 returned with 1
lago.ssh: DEBUG: Command cbd47e02 on lago-he-basic-ansible-suite-master-host-0 output: The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
lago.ssh: DEBUG: start task:93635936-0c48-4201-a674-56751d7504b7:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:93635936-0c48-4201-a674-56751d7504b7:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running cdfc7e14 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status
lago.ssh: DEBUG: Command cdfc7e14 on lago-he-basic-ansible-suite-master-host-0 returned with 1
lago.ssh: DEBUG: Command cdfc7e14 on lago-he-basic-ansible-suite-master-host-0 output: The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
lago.ssh: DEBUG: start task:8ad03421-e843-4501-aac9-130e5518bc9c:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:8ad03421-e843-4501-aac9-130e5518bc9c:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running d02681f8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status
lago.ssh: DEBUG: Command d02681f8 on lago-he-basic-ansible-suite-master-host-0 returned with 1
lago.ssh: DEBUG: Command d02681f8 on lago-he-basic-ansible-suite-master-host-0 output: The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
lago.ssh: DEBUG: start task:70aaef21-30d4-4999-9c7d-13c8f4182ade:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: end task:70aaef21-30d4-4999-9c7d-13c8f4182ade:Get ssh client for lago-he-basic-ansible-suite-master-host-0:
lago.ssh: DEBUG: Running d263c17e on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status
lago.ssh: DEBUG: Command d263c17e on lago-he-basic-ansible-suite-master-host-0 returned with 0
lago.ssh: DEBUG: Command d263c17e on lago-he-basic-ansible-suite-master-host-0 output: !! Cluster is in GLOBAL MAINTENANCE mode !!

--== Host lago-he-basic-ansible-suite-master-host-0 (id: 1) status ==--

conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : lago-he-basic-ansible-suite-master-host-0
Host ID                : 1
Engine status          : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "Down"}
Score                  : 0
stopped                : False
Local maintenance      : False
crc32                  : 47cc4b36
local_conf_timestamp   : 2682
Host timestamp         : 2681
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=2681 (Fri Sep 7 02:54:09 2018)
    host-id=1
    score=0
    vm_conf_refresh_time=2682 (Fri Sep 7 02:54:09 2018)
    conf_on_shared_storage=True
    maintenance=False
    state=ReinitializeFSM
    stopped=False

--== Host lago-he-basic-ansible-suite-master-host-1 (id: 2) status ==--

conf_on_shared_storage : True
Status up-to-date      : False
Hostname               : lago-he-basic-ansible-suite-master-host-1
Host ID                : 2
Engine status          : unknown stale-data
Score                  : 3000
stopped                : False
Local maintenance      : False
crc32                  : f45378aa
local_conf_timestamp   : 2681
Host timestamp         : 2681
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=2681 (Fri Sep 7 02:54:09 2018)
    host-id=2
    score=3000
    vm_conf_refresh_time=2681 (Fri Sep 7 02:54:09 2018)
    conf_on_shared_storage=True
    maintenance=False
    state=GlobalMaintenance
    stopped=False

!! Cluster is in GLOBAL MAINTENANCE mode !!
root: INFO: * Agent is ready.
root: INFO: * Starting VM...
lago.ssh: DEBUG: start task:9cc18d62-477b-4ad4-b8a9-f1148b742fcf:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:9cc18d62-477b-4ad4-b8a9-f1148b742fcf:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running d2dbe726 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-start lago.ssh: DEBUG: Command d2dbe726 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command d2dbe726 on lago-he-basic-ansible-suite-master-host-0 output: VM exists and is down, cleaning up and restarting root: INFO: * Command succeeded root: INFO: * Waiting for VM to be UP... lago.ssh: DEBUG: start task:3addeda7-f9e2-4353-a9b4-253e376b1a27:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:3addeda7-f9e2-4353-a9b4-253e376b1a27:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running d443e7ee on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command d443e7ee on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command d443e7ee on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2681 (Fri Sep 7 02:54:09 2018)\nhost-id=1\nscore=0\nvm_conf_refresh_time=2682 (Fri Sep 7 02:54:09 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=ReinitializeFSM\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "Down"}, "score": 0, "stopped": false, "maintenance": false, "crc32": "47cc4b36", "local_conf_timestamp": 2682, "host-ts": 2681}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2681 (Fri Sep 7 02:54:09 
2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2681 (Fri Sep 7 02:54:09 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "f45378aa", "local_conf_timestamp": 2681, "host-ts": 2681}, "global_maintenance": true} lago.ssh: DEBUG: start task:fc485d4c-5293-4ab4-87cf-bfd1f5f9155e:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:fc485d4c-5293-4ab4-87cf-bfd1f5f9155e:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running dae93090 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command dae93090 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command dae93090 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2691 (Fri Sep 7 02:54:19 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2691 (Fri Sep 7 02:54:19 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1359b99e", "local_conf_timestamp": 2691, "host-ts": 2691}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2681 (Fri Sep 7 02:54:09 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2681 (Fri Sep 7 02:54:09 
2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "f45378aa", "local_conf_timestamp": 2681, "host-ts": 2681}, "global_maintenance": true} root: INFO: * VM is UP. root: INFO: * Waiting for engine to start... lago.ssh: DEBUG: start task:95631ba4-13e5-445c-9cb3-39e2af4652e9:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:95631ba4-13e5-445c-9cb3-39e2af4652e9:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running db87eb7c on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command db87eb7c on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command db87eb7c on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2691 (Fri Sep 7 02:54:19 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2691 (Fri Sep 7 02:54:19 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1359b99e", "local_conf_timestamp": 2691, "host-ts": 2691}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2681 (Fri Sep 7 02:54:09 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2681 (Fri Sep 7 02:54:09 
2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "f45378aa", "local_conf_timestamp": 2681, "host-ts": 2681}, "global_maintenance": true} lago.ssh: DEBUG: start task:b7efe5e9-4704-4bd5-9630-7ccfdf8276aa:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:b7efe5e9-4704-4bd5-9630-7ccfdf8276aa:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running e21d882a on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command e21d882a on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command e21d882a on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2701 (Fri Sep 7 02:54:29 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2701 (Fri Sep 7 02:54:29 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "932ad914", "local_conf_timestamp": 2701, "host-ts": 2701}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2701 (Fri Sep 7 02:54:29 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2701 (Fri Sep 7 02:54:29 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": 
"lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "95862647", "local_conf_timestamp": 2701, "host-ts": 2701}, "global_maintenance": true} lago.ssh: DEBUG: start task:9701c7b1-5454-4a60-ae77-951711f76e0e:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:9701c7b1-5454-4a60-ae77-951711f76e0e:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running e4878552 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command e4878552 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command e4878552 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "36c74d9a", "local_conf_timestamp": 2711, "host-ts": 2711}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": 
"down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "d686a75b", "local_conf_timestamp": 2711, "host-ts": 2711}, "global_maintenance": true} lago.ssh: DEBUG: start task:a8271245-f82f-4c48-ad4f-221cc34729da:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:a8271245-f82f-4c48-ad4f-221cc34729da:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running e6ccc8cc on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command e6ccc8cc on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command e6ccc8cc on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "36c74d9a", "local_conf_timestamp": 2711, "host-ts": 2711}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "d686a75b", "local_conf_timestamp": 2711, 
"host-ts": 2711}, "global_maintenance": true} lago.ssh: DEBUG: start task:034ad030-8ad8-4638-a664-850f858b4321:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:034ad030-8ad8-4638-a664-850f858b4321:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running e92c85f8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command e92c85f8 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command e92c85f8 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "36c74d9a", "local_conf_timestamp": 2711, "host-ts": 2711}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "d686a75b", "local_conf_timestamp": 2711, "host-ts": 2711}, "global_maintenance": true} lago.ssh: DEBUG: start task:a40d8d79-8663-4d44-a25a-694d71248391:Get ssh client for 
lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:a40d8d79-8663-4d44-a25a-694d71248391:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running eb852b52 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command eb852b52 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command eb852b52 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2722 (Fri Sep 7 02:54:49 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "f3a4667f", "local_conf_timestamp": 2722, "host-ts": 2721}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2721 (Fri Sep 7 02:54:49 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1387247f", "local_conf_timestamp": 2721, "host-ts": 2721}, "global_maintenance": true} lago.ssh: DEBUG: start task:c1f770dc-80c9-488a-9e6e-9de963931606:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:c1f770dc-80c9-488a-9e6e-9de963931606:Get ssh client for 
lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running eddd99e8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command eddd99e8 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command eddd99e8 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2722 (Fri Sep 7 02:54:49 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "f3a4667f", "local_conf_timestamp": 2722, "host-ts": 2721}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2721 (Fri Sep 7 02:54:49 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1387247f", "local_conf_timestamp": 2721, "host-ts": 2721}, "global_maintenance": true} lago.ssh: DEBUG: start task:be04d79e-a05e-4a0b-8f5e-12b2ab159815:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:be04d79e-a05e-4a0b-8f5e-12b2ab159815:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running f052e3b8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status 
--json lago.ssh: DEBUG: Command f052e3b8 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command f052e3b8 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2731 (Fri Sep 7 02:54:58 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2731 (Fri Sep 7 02:54:59 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "78d12f00", "local_conf_timestamp": 2731, "host-ts": 2731}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2721 (Fri Sep 7 02:54:49 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1387247f", "local_conf_timestamp": 2721, "host-ts": 2721}, "global_maintenance": true} lago.ssh: DEBUG: start task:39213b21-16d2-4090-b54f-e55be19ea299:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:39213b21-16d2-4090-b54f-e55be19ea299:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running f2b118b4 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command f2b118b4 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command f2b118b4 on 
lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2731 (Fri Sep 7 02:54:58 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2731 (Fri Sep 7 02:54:59 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "78d12f00", "local_conf_timestamp": 2731, "host-ts": 2731}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2721 (Fri Sep 7 02:54:49 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1387247f", "local_conf_timestamp": 2721, "host-ts": 2721}, "global_maintenance": true} lago.ssh: DEBUG: start task:fd59ea63-060f-48ae-bf59-97d684ed9e21:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:fd59ea63-060f-48ae-bf59-97d684ed9e21:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running f51da478 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command f51da478 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command f51da478 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": 
"metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2731 (Fri Sep 7 02:54:58 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2731 (Fri Sep 7 02:54:59 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "78d12f00", "local_conf_timestamp": 2731, "host-ts": 2731}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2721 (Fri Sep 7 02:54:49 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1387247f", "local_conf_timestamp": 2721, "host-ts": 2721}, "global_maintenance": true} lago.ssh: DEBUG: start task:c7b5cd6a-5630-484c-9817-1904741bd302:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:c7b5cd6a-5630-484c-9817-1904741bd302:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running f76ea5f6 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command f76ea5f6 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command f76ea5f6 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2741 (Fri Sep 7 02:55:09 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2741 
(Fri Sep 7 02:55:09 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "07a5655b", "local_conf_timestamp": 2741, "host-ts": 2741}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2731 (Fri Sep 7 02:54:59 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2731 (Fri Sep 7 02:54:59 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "5087a563", "local_conf_timestamp": 2731, "host-ts": 2731}, "global_maintenance": true} lago.ssh: DEBUG: start task:0c02897b-b630-4399-af02-1b34df63f71b:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:0c02897b-b630-4399-af02-1b34df63f71b:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running f9d13ae8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command f9d13ae8 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command f9d13ae8 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2741 (Fri Sep 7 02:55:09 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2741 (Fri Sep 7 02:55:09 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": 
"lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "07a5655b", "local_conf_timestamp": 2741, "host-ts": 2741}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2731 (Fri Sep 7 02:54:59 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2731 (Fri Sep 7 02:54:59 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "5087a563", "local_conf_timestamp": 2731, "host-ts": 2731}, "global_maintenance": true} lago.ssh: DEBUG: start task:b1616944-ec5a-458c-b006-76487cda12a8:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:b1616944-ec5a-458c-b006-76487cda12a8:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running fc411168 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command fc411168 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command fc411168 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2751 (Fri Sep 7 02:55:19 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", 
"detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "15d97d9e", "local_conf_timestamp": 2751, "host-ts": 2751}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2751 (Fri Sep 7 02:55:19 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "01f5a56a", "local_conf_timestamp": 2751, "host-ts": 2751}, "global_maintenance": true} lago.ssh: DEBUG: start task:9fccacca-c35c-43d4-96d7-ee9b7c896561:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:9fccacca-c35c-43d4-96d7-ee9b7c896561:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running ff0e5dd8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command ff0e5dd8 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command ff0e5dd8 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2751 (Fri Sep 7 02:55:19 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "15d97d9e", "local_conf_timestamp": 2751, "host-ts": 2751}, "2": 
{"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2751 (Fri Sep 7 02:55:19 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "01f5a56a", "local_conf_timestamp": 2751, "host-ts": 2751}, "global_maintenance": true} lago.ssh: DEBUG: start task:b3be4966-4ce1-4017-8000-c7bb06f8407a:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:b3be4966-4ce1-4017-8000-c7bb06f8407a:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running 0160a136 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command 0160a136 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command 0160a136 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2751 (Fri Sep 7 02:55:19 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "15d97d9e", "local_conf_timestamp": 2751, "host-ts": 2751}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2751 (Fri Sep 7 
02:55:19 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "01f5a56a", "local_conf_timestamp": 2751, "host-ts": 2751}, "global_maintenance": true} lago.ssh: DEBUG: start task:91ee2d32-a2a3-4308-a616-47a0939ef3af:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:91ee2d32-a2a3-4308-a616-47a0939ef3af:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running 040ba282 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command 040ba282 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command 040ba282 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2761 (Fri Sep 7 02:55:29 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2762 (Fri Sep 7 02:55:30 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "ba54056b", "local_conf_timestamp": 2762, "host-ts": 2761}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2761 (Fri Sep 7 02:55:29 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2761 (Fri Sep 7 02:55:29 
2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "c4f4264e", "local_conf_timestamp": 2761, "host-ts": 2761}, "global_maintenance": true} lago.ssh: DEBUG: start task:a77979f4-60d8-435a-a066-ca83ab361658:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:a77979f4-60d8-435a-a066-ca83ab361658:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running 06646870 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command 06646870 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command 06646870 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2761 (Fri Sep 7 02:55:29 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2762 (Fri Sep 7 02:55:30 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "ba54056b", "local_conf_timestamp": 2762, "host-ts": 2761}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2761 (Fri Sep 7 02:55:29 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2761 (Fri Sep 7 02:55:29 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": 
"lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "c4f4264e", "local_conf_timestamp": 2761, "host-ts": 2761}, "global_maintenance": true} lago.ssh: DEBUG: start task:5ffce9af-34cc-4d06-acd6-b1e0eb6ff123:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:5ffce9af-34cc-4d06-acd6-b1e0eb6ff123:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running 08bfe054 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command 08bfe054 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command 08bfe054 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2772 (Fri Sep 7 02:55:40 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2772 (Fri Sep 7 02:55:40 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "b642f5be", "local_conf_timestamp": 2772, "host-ts": 2772}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2771 (Fri Sep 7 02:55:39 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2771 (Fri Sep 7 02:55:39 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": 
"down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "87f4a752", "local_conf_timestamp": 2771, "host-ts": 2771}, "global_maintenance": true} lago.ssh: DEBUG: start task:5c719791-874a-466a-93b6-95736ae5832e:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:5c719791-874a-466a-93b6-95736ae5832e:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running 0b0c64d6 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command 0b0c64d6 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command 0b0c64d6 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2772 (Fri Sep 7 02:55:40 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2772 (Fri Sep 7 02:55:40 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "b642f5be", "local_conf_timestamp": 2772, "host-ts": 2772}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2771 (Fri Sep 7 02:55:39 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2771 (Fri Sep 7 02:55:39 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host- ovirtlago.testlib: ERROR: * Unhandled exception in <function <lambda> at 0x7ff918199b18> Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 234, in assert_equals_within res = func() File 
"/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/008_restart_he_vm.py", line 187, in <lambda> for k, v in _get_he_status(host).items() File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/008_restart_he_vm.py", line 128, in _get_he_status raise RuntimeError('could not parse JSON: %s' % ret.out) RuntimeError: could not parse JSON: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2772 (Fri Sep 7 02:55:40 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=2772 (Fri Sep 7 02:55:40 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "b642f5be", "local_conf_timestamp": 2772, "host-ts": 2772}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=2771 (Fri Sep 7 02:55:39 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=2771 (Fri Sep 7 02:55:39 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host- --------------------- >> end captured logging << --------------------- Stack Trace: File "/usr/lib64/python2.7/unittest/case.py", line 369, in run testMethod() File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test test() File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper return func(get_test_prefix(), 
*args, **kwargs) File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/008_restart_he_vm.py", line 53, in restart_he_vm _wait_for_engine_health(host) File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/008_restart_he_vm.py", line 185, in _wait_for_engine_health testlib.assert_true_within_long(lambda: any( File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long assert_equals_within_long(func, True, allowed_exceptions) File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 234, in assert_equals_within res = func() File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/008_restart_he_vm.py", line 187, in <lambda> for k, v in _get_he_status(host).items() File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/008_restart_he_vm.py", line 128, in _get_he_status raise RuntimeError('could not parse JSON: %s' % ret.out) 'could not parse JSON: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2772 (Fri Sep 7 02:55:40 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2772 (Fri Sep 7 02:55:40 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": 
"b642f5be", "local_conf_timestamp": 2772, "host-ts": 2772}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2771 (Fri Sep 7 02:55:39 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2771 (Fri Sep 7 02:55:39 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-\n-------------------- >> begin captured logging << --------------------\nlago.ssh: DEBUG: start task:0148d290-97ba-42c9-b78e-616c813b22bf:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:0148d290-97ba-42c9-b78e-616c813b22bf:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running a363a2fe on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command a363a2fe on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command a363a2fe on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2599 (Fri Sep 7 02:52:47 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=2599 (Fri Sep 7 02:52:47 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUp\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Up"}, "score": 3400, "stopped": false, "maintenance": false, "crc32": "3e907b1b", "local_conf_timestamp": 2599, "host-ts": 2599}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2600 (Fri Sep 7 02:52:48 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2600 (Fri Sep 7 02:52:48 
2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineDown\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "e8ac0ce1", "local_conf_timestamp": 2600, "host-ts": 2600}, "global_maintenance": true}\n\nroot: INFO: * Shutting down HE VM on host: lago-he-basic-ansible-suite-master-host-0\nlago.ssh: DEBUG: start task:b74bbe86-d612-41c6-80f2-71f425bcbe1c:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:b74bbe86-d612-41c6-80f2-71f425bcbe1c:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running a3e4bdb2 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-shutdown\nlago.ssh: DEBUG: Command a3e4bdb2 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nroot: INFO: * Command succeeded\nroot: INFO: * Waiting for VM to be down...\nlago.ssh: DEBUG: start task:f780be22-86eb-4b14-91c6-bdc6e00dc13b:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:f780be22-86eb-4b14-91c6-bdc6e00dc13b:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running a507720c on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command a507720c on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command a507720c on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2599 (Fri Sep 7 02:52:47 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=2599 (Fri Sep 7 02:52:47 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUp\\nstopped=False\\n", "hostname": 
"lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Up"}, "score": 3400, "stopped": false, "maintenance": false, "crc32": "3e907b1b", "local_conf_timestamp": 2599, "host-ts": 2599}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2600 (Fri Sep 7 02:52:48 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2600 (Fri Sep 7 02:52:48 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineDown\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "e8ac0ce1", "local_conf_timestamp": 2600, "host-ts": 2600}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:7850e30c-e26a-4894-8914-c722f7c8ae33:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:7850e30c-e26a-4894-8914-c722f7c8ae33:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running ab9ec64c on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command ab9ec64c on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command ab9ec64c on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2609 (Fri Sep 7 02:52:57 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2609 (Fri Sep 7 02:52:57 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, 
"maintenance": false, "crc32": "efc01efd", "local_conf_timestamp": 2609, "host-ts": 2609}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2600 (Fri Sep 7 02:52:48 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2600 (Fri Sep 7 02:52:48 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineDown\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "e8ac0ce1", "local_conf_timestamp": 2600, "host-ts": 2600}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:73a0814e-0cf9-4396-8c93-7909aefe4c97:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:73a0814e-0cf9-4396-8c93-7909aefe4c97:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running adff7936 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command adff7936 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command adff7936 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2619 (Fri Sep 7 02:53:07 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2619 (Fri Sep 7 02:53:07 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "11b18f20", "local_conf_timestamp": 2619, "host-ts": 2619}, "2": {"conf_on_shared_storage": 
true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2610 (Fri Sep 7 02:52:59 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2611 (Fri Sep 7 02:52:59 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineDown\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "dcabbd6b", "local_conf_timestamp": 2611, "host-ts": 2610}, "global_maintenance": true}\n\nroot: INFO: * VM is down.\nroot: INFO: * Stopping services...\nlago.ssh: DEBUG: start task:7151443a-3382-43da-b1f1-c0401583d5d6:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:7151443a-3382-43da-b1f1-c0401583d5d6:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running ae846dee on lago-he-basic-ansible-suite-master-host-0: systemctl stop vdsmd ovirt-ha-broker ovirt-ha-agent\nlago.ssh: DEBUG: Command ae846dee on lago-he-basic-ansible-suite-master-host-0 returned with 0\nroot: INFO: * Starting services...\nlago.ssh: DEBUG: start task:bd93de3a-8477-4561-9fea-c9b6de1c1b3a:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:bd93de3a-8477-4561-9fea-c9b6de1c1b3a:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running b8285586 on lago-he-basic-ansible-suite-master-host-0: systemctl start vdsmd ovirt-ha-broker ovirt-ha-agent\nlago.ssh: DEBUG: Command b8285586 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nroot: INFO: * Waiting for agent to be ready...\nlago.ssh: DEBUG: start task:dc27fa1c-dd11-401b-bbac-7d7bcc145240:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:dc27fa1c-dd11-401b-bbac-7d7bcc145240:Get ssh client for 
lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running ba988070 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status\nlago.ssh: DEBUG: Command ba988070 on lago-he-basic-ansible-suite-master-host-0 returned with 1\nlago.ssh: DEBUG: Command ba988070 on lago-he-basic-ansible-suite-master-host-0 output:\n The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.\n\nlago.ssh: DEBUG: start task:e5c06cb0-2d27-4b4e-8394-0fad85438b17:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:e5c06cb0-2d27-4b4e-8394-0fad85438b17:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running c0f0e7aa on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status\nlago.ssh: DEBUG: Command c0f0e7aa on lago-he-basic-ansible-suite-master-host-0 returned with 1\nlago.ssh: DEBUG: Command c0f0e7aa on lago-he-basic-ansible-suite-master-host-0 output:\n The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.\n\nlago.ssh: DEBUG: start task:c62d9da2-1e89-4227-a9d4-c6d7ad42d0b2:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:c62d9da2-1e89-4227-a9d4-c6d7ad42d0b2:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running c324d108 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status\nlago.ssh: DEBUG: Command c324d108 on lago-he-basic-ansible-suite-master-host-0 returned with 1\nlago.ssh: DEBUG: Command c324d108 on lago-he-basic-ansible-suite-master-host-0 output:\n The hosted engine configuration has not been retrieved from shared storage. 
Please ensure that ovirt-ha-agent is running and the storage server is reachable.\n\nlago.ssh: DEBUG: start task:6a3f3ee3-678e-48d4-85a3-437b983736d6:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:6a3f3ee3-678e-48d4-85a3-437b983736d6:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running c554ad90 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status\nlago.ssh: DEBUG: Command c554ad90 on lago-he-basic-ansible-suite-master-host-0 returned with 1\nlago.ssh: DEBUG: Command c554ad90 on lago-he-basic-ansible-suite-master-host-0 output:\n The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.\n\nlago.ssh: DEBUG: start task:be7fa34d-c089-4e0a-a5df-1065896c0ad3:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:be7fa34d-c089-4e0a-a5df-1065896c0ad3:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running c7834d06 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status\nlago.ssh: DEBUG: Command c7834d06 on lago-he-basic-ansible-suite-master-host-0 returned with 1\nlago.ssh: DEBUG: Command c7834d06 on lago-he-basic-ansible-suite-master-host-0 output:\n The hosted engine configuration has not been retrieved from shared storage. 
Please ensure that ovirt-ha-agent is running and the storage server is reachable.\n\nlago.ssh: DEBUG: start task:4e788f25-696f-443a-9713-3749b602fa42:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:4e788f25-696f-443a-9713-3749b602fa42:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running c9acb7e8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status\nlago.ssh: DEBUG: Command c9acb7e8 on lago-he-basic-ansible-suite-master-host-0 returned with 1\nlago.ssh: DEBUG: Command c9acb7e8 on lago-he-basic-ansible-suite-master-host-0 output:\n The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.\n\nlago.ssh: DEBUG: start task:c7aababd-80ff-479d-8eda-4a5fd9afa601:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:c7aababd-80ff-479d-8eda-4a5fd9afa601:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running cbd47e02 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status\nlago.ssh: DEBUG: Command cbd47e02 on lago-he-basic-ansible-suite-master-host-0 returned with 1\nlago.ssh: DEBUG: Command cbd47e02 on lago-he-basic-ansible-suite-master-host-0 output:\n The hosted engine configuration has not been retrieved from shared storage. 
Please ensure that ovirt-ha-agent is running and the storage server is reachable.\n\nlago.ssh: DEBUG: start task:93635936-0c48-4201-a674-56751d7504b7:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:93635936-0c48-4201-a674-56751d7504b7:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running cdfc7e14 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status\nlago.ssh: DEBUG: Command cdfc7e14 on lago-he-basic-ansible-suite-master-host-0 returned with 1\nlago.ssh: DEBUG: Command cdfc7e14 on lago-he-basic-ansible-suite-master-host-0 output:\n The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.\n\nlago.ssh: DEBUG: start task:8ad03421-e843-4501-aac9-130e5518bc9c:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:8ad03421-e843-4501-aac9-130e5518bc9c:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running d02681f8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status\nlago.ssh: DEBUG: Command d02681f8 on lago-he-basic-ansible-suite-master-host-0 returned with 1\nlago.ssh: DEBUG: Command d02681f8 on lago-he-basic-ansible-suite-master-host-0 output:\n The hosted engine configuration has not been retrieved from shared storage. 
Please ensure that ovirt-ha-agent is running and the storage server is reachable.\n\nlago.ssh: DEBUG: start task:70aaef21-30d4-4999-9c7d-13c8f4182ade:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:70aaef21-30d4-4999-9c7d-13c8f4182ade:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running d263c17e on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status\nlago.ssh: DEBUG: Command d263c17e on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command d263c17e on lago-he-basic-ansible-suite-master-host-0 output:\n \n\n!! Cluster is in GLOBAL MAINTENANCE mode !!\n\n\n\n--== Host lago-he-basic-ansible-suite-master-host-0 (id: 1) status ==--\n\nconf_on_shared_storage : True\nStatus up-to-date : True\nHostname : lago-he-basic-ansible-suite-master-host-0\nHost ID : 1\nEngine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "Down"}\nScore : 0\nstopped : False\nLocal maintenance : False\ncrc32 : 47cc4b36\nlocal_conf_timestamp : 2682\nHost timestamp : 2681\nExtra metadata (valid at timestamp):\n\tmetadata_parse_version=1\n\tmetadata_feature_version=1\n\ttimestamp=2681 (Fri Sep 7 02:54:09 2018)\n\thost-id=1\n\tscore=0\n\tvm_conf_refresh_time=2682 (Fri Sep 7 02:54:09 2018)\n\tconf_on_shared_storage=True\n\tmaintenance=False\n\tstate=ReinitializeFSM\n\tstopped=False\n\n\n--== Host lago-he-basic-ansible-suite-master-host-1 (id: 2) status ==--\n\nconf_on_shared_storage : True\nStatus up-to-date : False\nHostname : lago-he-basic-ansible-suite-master-host-1\nHost ID : 2\nEngine status : unknown stale-data\nScore : 3000\nstopped : False\nLocal maintenance : False\ncrc32 : f45378aa\nlocal_conf_timestamp : 2681\nHost timestamp : 2681\nExtra metadata (valid at timestamp):\n\tmetadata_parse_version=1\n\tmetadata_feature_version=1\n\ttimestamp=2681 (Fri Sep 7 02:54:09 2018)\n\thost-id=2\n\tscore=3000\n\tvm_conf_refresh_time=2681 (Fri Sep 7 02:54:09 
2018)\n\tconf_on_shared_storage=True\n\tmaintenance=False\n\tstate=GlobalMaintenance\n\tstopped=False\n\n\n!! Cluster is in GLOBAL MAINTENANCE mode !!\n\n\nroot: INFO: * Agent is ready.\nroot: INFO: * Starting VM...\nlago.ssh: DEBUG: start task:9cc18d62-477b-4ad4-b8a9-f1148b742fcf:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:9cc18d62-477b-4ad4-b8a9-f1148b742fcf:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running d2dbe726 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-start\nlago.ssh: DEBUG: Command d2dbe726 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command d2dbe726 on lago-he-basic-ansible-suite-master-host-0 output:\n VM exists and is down, cleaning up and restarting\n\nroot: INFO: * Command succeeded\nroot: INFO: * Waiting for VM to be UP...\nlago.ssh: DEBUG: start task:3addeda7-f9e2-4353-a9b4-253e376b1a27:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:3addeda7-f9e2-4353-a9b4-253e376b1a27:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running d443e7ee on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command d443e7ee on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command d443e7ee on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2681 (Fri Sep 7 02:54:09 2018)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=2682 (Fri Sep 7 02:54:09 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=ReinitializeFSM\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "Down"}, "score": 0, "stopped": false, "maintenance": false, 
"crc32": "47cc4b36", "local_conf_timestamp": 2682, "host-ts": 2681}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2681 (Fri Sep 7 02:54:09 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2681 (Fri Sep 7 02:54:09 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "f45378aa", "local_conf_timestamp": 2681, "host-ts": 2681}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:fc485d4c-5293-4ab4-87cf-bfd1f5f9155e:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:fc485d4c-5293-4ab4-87cf-bfd1f5f9155e:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running dae93090 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command dae93090 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command dae93090 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2691 (Fri Sep 7 02:54:19 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2691 (Fri Sep 7 02:54:19 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1359b99e", "local_conf_timestamp": 2691, "host-ts": 2691}, "2": {"conf_on_shared_storage": true, "live-data": 
false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2681 (Fri Sep 7 02:54:09 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2681 (Fri Sep 7 02:54:09 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "f45378aa", "local_conf_timestamp": 2681, "host-ts": 2681}, "global_maintenance": true}\n\nroot: INFO: * VM is UP.\nroot: INFO: * Waiting for engine to start...\nlago.ssh: DEBUG: start task:95631ba4-13e5-445c-9cb3-39e2af4652e9:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:95631ba4-13e5-445c-9cb3-39e2af4652e9:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running db87eb7c on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command db87eb7c on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command db87eb7c on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2691 (Fri Sep 7 02:54:19 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2691 (Fri Sep 7 02:54:19 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1359b99e", "local_conf_timestamp": 2691, "host-ts": 2691}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": 
"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2681 (Fri Sep 7 02:54:09 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2681 (Fri Sep 7 02:54:09 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "f45378aa", "local_conf_timestamp": 2681, "host-ts": 2681}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:b7efe5e9-4704-4bd5-9630-7ccfdf8276aa:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:b7efe5e9-4704-4bd5-9630-7ccfdf8276aa:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running e21d882a on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command e21d882a on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command e21d882a on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2701 (Fri Sep 7 02:54:29 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2701 (Fri Sep 7 02:54:29 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "932ad914", "local_conf_timestamp": 2701, "host-ts": 2701}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2701 (Fri Sep 7 02:54:29 
2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2701 (Fri Sep 7 02:54:29 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "95862647", "local_conf_timestamp": 2701, "host-ts": 2701}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:9701c7b1-5454-4a60-ae77-951711f76e0e:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:9701c7b1-5454-4a60-ae77-951711f76e0e:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running e4878552 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command e4878552 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command e4878552 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "36c74d9a", "local_conf_timestamp": 2711, "host-ts": 2711}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 
2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "d686a75b", "local_conf_timestamp": 2711, "host-ts": 2711}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:a8271245-f82f-4c48-ad4f-221cc34729da:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:a8271245-f82f-4c48-ad4f-221cc34729da:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running e6ccc8cc on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command e6ccc8cc on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command e6ccc8cc on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "36c74d9a", "local_conf_timestamp": 2711, "host-ts": 2711}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": 
"lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "d686a75b", "local_conf_timestamp": 2711, "host-ts": 2711}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:034ad030-8ad8-4638-a664-850f858b4321:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:034ad030-8ad8-4638-a664-850f858b4321:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running e92c85f8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command e92c85f8 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command e92c85f8 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "36c74d9a", "local_conf_timestamp": 2711, "host-ts": 2711}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2711 (Fri Sep 7 02:54:39 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2711 (Fri Sep 7 02:54:39 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", 
"health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "d686a75b", "local_conf_timestamp": 2711, "host-ts": 2711}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:a40d8d79-8663-4d44-a25a-694d71248391:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:a40d8d79-8663-4d44-a25a-694d71248391:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running eb852b52 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command eb852b52 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command eb852b52 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2722 (Fri Sep 7 02:54:49 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "f3a4667f", "local_conf_timestamp": 2722, "host-ts": 2721}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2721 (Fri Sep 7 02:54:49 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, 
"crc32": "1387247f", "local_conf_timestamp": 2721, "host-ts": 2721}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:c1f770dc-80c9-488a-9e6e-9de963931606:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:c1f770dc-80c9-488a-9e6e-9de963931606:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running eddd99e8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command eddd99e8 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command eddd99e8 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2722 (Fri Sep 7 02:54:49 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "f3a4667f", "local_conf_timestamp": 2722, "host-ts": 2721}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2721 (Fri Sep 7 02:54:49 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1387247f", "local_conf_timestamp": 2721, "host-ts": 2721}, "global_maintenance": true}\n\nlago.ssh: DEBUG: 
start task:be04d79e-a05e-4a0b-8f5e-12b2ab159815:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:be04d79e-a05e-4a0b-8f5e-12b2ab159815:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running f052e3b8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command f052e3b8 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command f052e3b8 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2731 (Fri Sep 7 02:54:58 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2731 (Fri Sep 7 02:54:59 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "78d12f00", "local_conf_timestamp": 2731, "host-ts": 2731}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2721 (Fri Sep 7 02:54:49 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1387247f", "local_conf_timestamp": 2721, "host-ts": 2721}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:39213b21-16d2-4090-b54f-e55be19ea299:Get ssh client for 
lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:39213b21-16d2-4090-b54f-e55be19ea299:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running f2b118b4 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command f2b118b4 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command f2b118b4 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2731 (Fri Sep 7 02:54:58 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2731 (Fri Sep 7 02:54:59 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "78d12f00", "local_conf_timestamp": 2731, "host-ts": 2731}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2721 (Fri Sep 7 02:54:49 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1387247f", "local_conf_timestamp": 2721, "host-ts": 2721}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:fd59ea63-060f-48ae-bf59-97d684ed9e21:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:fd59ea63-060f-48ae-bf59-97d684ed9e21:Get ssh 
client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running f51da478 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command f51da478 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command f51da478 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2731 (Fri Sep 7 02:54:58 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2731 (Fri Sep 7 02:54:59 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "78d12f00", "local_conf_timestamp": 2731, "host-ts": 2731}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2721 (Fri Sep 7 02:54:49 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2721 (Fri Sep 7 02:54:49 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "1387247f", "local_conf_timestamp": 2721, "host-ts": 2721}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:c7b5cd6a-5630-484c-9817-1904741bd302:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:c7b5cd6a-5630-484c-9817-1904741bd302:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running f76ea5f6 on 
lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command f76ea5f6 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command f76ea5f6 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2741 (Fri Sep 7 02:55:09 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2741 (Fri Sep 7 02:55:09 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "07a5655b", "local_conf_timestamp": 2741, "host-ts": 2741}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2731 (Fri Sep 7 02:54:59 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2731 (Fri Sep 7 02:54:59 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "5087a563", "local_conf_timestamp": 2731, "host-ts": 2731}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:0c02897b-b630-4399-af02-1b34df63f71b:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:0c02897b-b630-4399-af02-1b34df63f71b:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running f9d13ae8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command f9d13ae8 on 
lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command f9d13ae8 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2741 (Fri Sep 7 02:55:09 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2741 (Fri Sep 7 02:55:09 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "07a5655b", "local_conf_timestamp": 2741, "host-ts": 2741}, "2": {"conf_on_shared_storage": true, "live-data": false, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2731 (Fri Sep 7 02:54:59 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2731 (Fri Sep 7 02:54:59 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "5087a563", "local_conf_timestamp": 2731, "host-ts": 2731}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:b1616944-ec5a-458c-b006-76487cda12a8:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:b1616944-ec5a-458c-b006-76487cda12a8:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running fc411168 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command fc411168 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command fc411168 on 
lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2751 (Fri Sep 7 02:55:19 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "15d97d9e", "local_conf_timestamp": 2751, "host-ts": 2751}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2751 (Fri Sep 7 02:55:19 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "01f5a56a", "local_conf_timestamp": 2751, "host-ts": 2751}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:9fccacca-c35c-43d4-96d7-ee9b7c896561:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:9fccacca-c35c-43d4-96d7-ee9b7c896561:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running ff0e5dd8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command ff0e5dd8 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command ff0e5dd8 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": 
"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2751 (Fri Sep 7 02:55:19 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "15d97d9e", "local_conf_timestamp": 2751, "host-ts": 2751}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2751 (Fri Sep 7 02:55:19 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "01f5a56a", "local_conf_timestamp": 2751, "host-ts": 2751}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:b3be4966-4ce1-4017-8000-c7bb06f8407a:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:b3be4966-4ce1-4017-8000-c7bb06f8407a:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running 0160a136 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command 0160a136 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command 0160a136 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2751 (Fri Sep 7 02:55:19 
2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "15d97d9e", "local_conf_timestamp": 2751, "host-ts": 2751}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2751 (Fri Sep 7 02:55:19 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2751 (Fri Sep 7 02:55:19 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "01f5a56a", "local_conf_timestamp": 2751, "host-ts": 2751}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:91ee2d32-a2a3-4308-a616-47a0939ef3af:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:91ee2d32-a2a3-4308-a616-47a0939ef3af:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running 040ba282 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command 040ba282 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command 040ba282 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2761 (Fri Sep 7 02:55:29 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2762 (Fri Sep 7 02:55:30 
2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "ba54056b", "local_conf_timestamp": 2762, "host-ts": 2761}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2761 (Fri Sep 7 02:55:29 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2761 (Fri Sep 7 02:55:29 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "c4f4264e", "local_conf_timestamp": 2761, "host-ts": 2761}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:a77979f4-60d8-435a-a066-ca83ab361658:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:a77979f4-60d8-435a-a066-ca83ab361658:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running 06646870 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command 06646870 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command 06646870 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2761 (Fri Sep 7 02:55:29 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2762 (Fri Sep 7 02:55:30 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": 
"lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "ba54056b", "local_conf_timestamp": 2762, "host-ts": 2761}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2761 (Fri Sep 7 02:55:29 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2761 (Fri Sep 7 02:55:29 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "c4f4264e", "local_conf_timestamp": 2761, "host-ts": 2761}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:5ffce9af-34cc-4d06-acd6-b1e0eb6ff123:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:5ffce9af-34cc-4d06-acd6-b1e0eb6ff123:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running 08bfe054 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command 08bfe054 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command 08bfe054 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2772 (Fri Sep 7 02:55:40 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2772 (Fri Sep 7 02:55:40 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", 
"health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "b642f5be", "local_conf_timestamp": 2772, "host-ts": 2772}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2771 (Fri Sep 7 02:55:39 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2771 (Fri Sep 7 02:55:39 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "87f4a752", "local_conf_timestamp": 2771, "host-ts": 2771}, "global_maintenance": true}\n\nlago.ssh: DEBUG: start task:5c719791-874a-466a-93b6-95736ae5832e:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:5c719791-874a-466a-93b6-95736ae5832e:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running 0b0c64d6 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command 0b0c64d6 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command 0b0c64d6 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2772 (Fri Sep 7 02:55:40 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2772 (Fri Sep 7 02:55:40 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": 
"b642f5be", "local_conf_timestamp": 2772, "host-ts": 2772}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2771 (Fri Sep 7 02:55:39 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2771 (Fri Sep 7 02:55:39 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-\novirtlago.testlib: ERROR: * Unhandled exception in <function <lambda> at 0x7ff918199b18>\nTraceback (most recent call last):\n File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 234, in assert_equals_within\n res = func()\n File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/008_restart_he_vm.py", line 187, in <lambda>\n for k, v in _get_he_status(host).items()\n File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/008_restart_he_vm.py", line 128, in _get_he_status\n raise RuntimeError(\'could not parse JSON: %s\' % ret.out)\nRuntimeError: could not parse JSON: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2772 (Fri Sep 7 02:55:40 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=2772 (Fri Sep 7 02:55:40 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "b642f5be", "local_conf_timestamp": 2772, "host-ts": 2772}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": 
"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2771 (Fri Sep 7 02:55:39 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=2771 (Fri Sep 7 02:55:39 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=GlobalMaintenance\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-\n--------------------- >> end captured logging << ---------------------'

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build Number: 609 Build Status: Still Failing Triggered By: Started by timer ------------------------------------- Changes Since Last Success: ------------------------------------- Changes for Build #604 [Gal Ben Haim] Adding dr suite [Barak Korren] Remove the populate_mock function from mock_runner [Daniel Belenky] mock_runner: store shell cmd in a variable [Barak Korren] mock_runner: Added timeout param [Barak Korren] Make whitelist repo configurable via env vars [Daniel Belenky] stdci_runner: let mock_runner manage timeout [Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config Changes for Build #605 [Gal Ben Haim] Adding dr suite Changes for Build #606 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file [Barak Korren] Enable CI for kubevirt/client-python [Daniel Belenky] Add timeout config to stdci dsl [Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg Changes for Build #607 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #608 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #609 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file ----------------- Failed Tests: ----------------- 1 tests failed. FAILED: 012_local_maintenance_sdk.local_maintenance Error Message: False != True after 600 seconds -------------------- >> begin captured logging << -------------------- root: INFO: * Waiting For System Stability... root: INFO: * Performing Deactivation... 
--------------------- >> end captured logging << --------------------- Stack Trace: File "/usr/lib64/python2.7/unittest/case.py", line 369, in run testMethod() File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test test() File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper return func(get_test_prefix(), *args, **kwargs) File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper return func(get_test_prefix(), *args, **kwargs) File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/012_local_maintenance_sdk.py", line 65, in local_maintenance lambda: host_service.get().status == types.HostStatus.MAINTENANCE or File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long assert_equals_within_long(func, True, allowed_exceptions) File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 252, in assert_equals_within '%s != %s after %s seconds' % (res, value, timeout) 'False != True after 600 seconds\n-------------------- >> begin captured logging << --------------------\nroot: INFO: * Waiting For System Stability...\nroot: INFO: * Performing Deactivation...\n--------------------- >> end captured logging << ---------------------'
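The "False != True after 600 seconds" message comes from the polling helpers in the traceback: `assert_true_within_long` delegates to `assert_equals_within` with `LONG_TIMEOUT`. A minimal sketch of that pattern, matching the names and message format in the trace (the 3-second polling interval is an assumption):

```python
import time

LONG_TIMEOUT = 600  # matches "after 600 seconds" in the failure message

def assert_equals_within(func, value, timeout, interval=3):
    # Re-evaluate func() until it returns `value` or the timeout expires,
    # then fail with the "X != Y after N seconds" message seen above.
    deadline = time.time() + timeout
    while True:
        res = func()
        if res == value:
            return
        if time.time() >= deadline:
            raise AssertionError('%s != %s after %s seconds'
                                 % (res, value, timeout))
        time.sleep(interval)

def assert_true_within_long(func):
    assert_equals_within(func, True, LONG_TIMEOUT)
```

So the failure means the host never reported `HostStatus.MAINTENANCE` within the full 600-second window, not that a single status call errored out.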

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build Number: 610 Build Status: Still Failing Triggered By: Started by timer ------------------------------------- Changes Since Last Success: ------------------------------------- Changes for Build #604 [Gal Ben Haim] Adding dr suite [Barak Korren] Remove the populate_mock function from mock_runner [Daniel Belenky] mock_runner: store shell cmd in a variable [Barak Korren] mock_runner: Added timeout param [Barak Korren] Make whitelist repo configurable via env vars [Daniel Belenky] stdci_runner: let mock_runner manage timeout [Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config Changes for Build #605 [Gal Ben Haim] Adding dr suite Changes for Build #606 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file [Barak Korren] Enable CI for kubevirt/client-python [Daniel Belenky] Add timeout config to stdci dsl [Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg Changes for Build #607 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #608 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #609 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #610 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file ----------------- Failed Tests: ----------------- 1 tests failed. FAILED: 012_local_maintenance_sdk.local_maintenance Error Message: False != True after 600 seconds -------------------- >> begin captured logging << -------------------- root: INFO: * Waiting For System Stability... root: INFO: * Performing Deactivation... 
--------------------- >> end captured logging << ---------------------

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build Number: 611 Build Status: Still Failing Triggered By: Started by timer ------------------------------------- Changes Since Last Success: ------------------------------------- Changes for Build #604 [Gal Ben Haim] Adding dr suite [Barak Korren] Remove the populate_mock function from mock_runner [Daniel Belenky] mock_runner: store shell cmd in a variable [Barak Korren] mock_runner: Added timeout param [Barak Korren] Make whitelist repo configurable via env vars [Daniel Belenky] stdci_runner: let mock_runner manage timeout [Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config Changes for Build #605 [Gal Ben Haim] Adding dr suite Changes for Build #606 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file [Barak Korren] Enable CI for kubevirt/client-python [Daniel Belenky] Add timeout config to stdci dsl [Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg Changes for Build #607 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #608 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #609 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #610 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #611 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file ----------------- Failed Tests: ----------------- 1 tests failed. FAILED: 012_local_maintenance_sdk.local_maintenance Error Message: False != True after 600 seconds -------------------- >> begin captured logging << -------------------- root: INFO: * Waiting For System Stability... root: INFO: * Performing Deactivation... 
--------------------- >> end captured logging << ---------------------

On Sat, Sep 8, 2018 at 9:31 AM <jenkins@jenkins.phx.ovirt.org> wrote:
Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build Number: 611 Build Status: Still Failing Triggered By: Started by timer
----------------- Failed Tests: ----------------- 1 tests failed. FAILED: 012_local_maintenance_sdk.local_maintenance
Error Message: False != True after 600 seconds -------------------- >> begin captured logging << -------------------- root: INFO: * Waiting For System Stability... root: INFO: * Performing Deactivation... --------------------- >> end captured logging << ---------------------
According to:

2018-09-08 02:58:05,274-04 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-27) [] Failed in 'Get Host Statistics' method
2018-09-08 02:58:05,275-04 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-27) [] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': '[Errno 22] Invalid argument'}]
2018-09-08 02:58:05,294-04 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-27) [] Unable to GetStats: VDSErrorException: VDSGenericException: VDSErrorException: Failed to Get Host Statistics, error = Internal JSON-RPC error: {'reason': '[Errno 22] Invalid argument'}, code = -32603

https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-mast...

it seems that at a certain point the engine fails to communicate with the host due to a protocol issue. I filed a bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1626160

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build Number: 612 Build Status: Still Failing Triggered By: Started by timer ------------------------------------- Changes Since Last Success: ------------------------------------- Changes for Build #604 [Gal Ben Haim] Adding dr suite [Barak Korren] Remove the populate_mock function from mock_runner [Daniel Belenky] mock_runner: store shell cmd in a variable [Barak Korren] mock_runner: Added timeout param [Barak Korren] Make whitelist repo configurable via env vars [Daniel Belenky] stdci_runner: let mock_runner manage timeout [Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config Changes for Build #605 [Gal Ben Haim] Adding dr suite Changes for Build #606 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file [Barak Korren] Enable CI for kubevirt/client-python [Daniel Belenky] Add timeout config to stdci dsl [Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg Changes for Build #607 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #608 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #609 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #610 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #611 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #612 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file ----------------- Failed Tests: ----------------- 1 tests failed. FAILED: 012_local_maintenance_sdk.local_maintenance Error Message: False != True after 600 seconds -------------------- >> begin captured logging << -------------------- root: INFO: * Waiting For System Stability... root: INFO: * Performing Deactivation... 
--------------------- >> end captured logging << ---------------------

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build Number: 613 Build Status: Still Failing Triggered By: Started by timer ------------------------------------- Changes Since Last Success: ------------------------------------- Changes for Build #604 [Gal Ben Haim] Adding dr suite [Barak Korren] Remove the populate_mock function from mock_runner [Daniel Belenky] mock_runner: store shell cmd in a variable [Barak Korren] mock_runner: Added timeout param [Barak Korren] Make whitelist repo configurable via env vars [Daniel Belenky] stdci_runner: let mock_runner manage timeout [Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config Changes for Build #605 [Gal Ben Haim] Adding dr suite Changes for Build #606 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file [Barak Korren] Enable CI for kubevirt/client-python [Daniel Belenky] Add timeout config to stdci dsl [Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg Changes for Build #607 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #608 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #609 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #610 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #611 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #612 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #613 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file ----------------- Failed Tests: ----------------- 1 tests failed. FAILED: 012_local_maintenance_sdk.local_maintenance Error Message: False != True after 600 seconds -------------------- >> begin captured logging << -------------------- root: INFO: * Waiting For System Stability... 
root: INFO: * Performing Deactivation... --------------------- >> end captured logging << ---------------------

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build Number: 614 Build Status: Still Failing Triggered By: Started by timer ------------------------------------- Changes Since Last Success: ------------------------------------- Changes for Build #604 [Gal Ben Haim] Adding dr suite [Barak Korren] Remove the populate_mock function from mock_runner [Daniel Belenky] mock_runner: store shell cmd in a variable [Barak Korren] mock_runner: Added timeout param [Barak Korren] Make whitelist repo configurable via env vars [Daniel Belenky] stdci_runner: let mock_runner manage timeout [Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config Changes for Build #605 [Gal Ben Haim] Adding dr suite Changes for Build #606 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file [Barak Korren] Enable CI for kubevirt/client-python [Daniel Belenky] Add timeout config to stdci dsl [Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg Changes for Build #607 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #608 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #609 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #610 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #611 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #612 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #613 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #614 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file ----------------- Failed Tests: ----------------- 1 tests failed. 
FAILED: 012_local_maintenance_sdk.local_maintenance Error Message: False != True after 600 seconds -------------------- >> begin captured logging << -------------------- root: INFO: * Waiting For System Stability... root: INFO: * Performing Deactivation... --------------------- >> end captured logging << ---------------------

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste... Build Number: 615 Build Status: Still Failing Triggered By: Started by timer ------------------------------------- Changes Since Last Success: ------------------------------------- Changes for Build #604 [Gal Ben Haim] Adding dr suite [Barak Korren] Remove the populate_mock function from mock_runner [Daniel Belenky] mock_runner: store shell cmd in a variable [Barak Korren] mock_runner: Added timeout param [Barak Korren] Make whitelist repo configurable via env vars [Daniel Belenky] stdci_runner: let mock_runner manage timeout [Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config Changes for Build #605 [Gal Ben Haim] Adding dr suite Changes for Build #606 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file [Barak Korren] Enable CI for kubevirt/client-python [Daniel Belenky] Add timeout config to stdci dsl [Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg Changes for Build #607 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #608 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #609 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #610 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #611 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #612 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #613 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #614 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file Changes for Build #615 [Ehud Yonasi] ovirt-master.repo: Added new packages to repo file ----------------- Failed Tests: ----------------- 1 tests failed. 
FAILED: 012_local_maintenance_sdk.local_maintenance

Error Message:
False != True after 600 seconds
-------------------- >> begin captured logging << --------------------
root: INFO: * Waiting For System Stability...
root: INFO: * Performing Deactivation...
--------------------- >> end captured logging << ---------------------

Stack Trace:
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper
    prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/012_local_maintenance_sdk.py", line 65, in local_maintenance
    lambda: host_service.get().status == types.HostStatus.MAINTENANCE or
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long
    assert_equals_within_long(func, True, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long
    func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 252, in assert_equals_within
    '%s != %s after %s seconds' % (res, value, timeout)
'False != True after 600 seconds\n-------------------- >> begin captured logging << --------------------\nroot: INFO: * Waiting For System Stability...\nroot: INFO: * Performing Deactivation...\n--------------------- >> end captured logging << ---------------------'
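The "False != True after 600 seconds" message comes from ovirtlago's polling assertion, which re-evaluates a predicate until it returns the expected value or a timeout elapses. A minimal sketch of that pattern (hypothetical; not the actual ovirtlago.testlib source, whose signature also takes allowed_exceptions):

```python
import time

def assert_equals_within(func, value, timeout, interval=3):
    """Re-evaluate func() until it equals value, or raise after timeout seconds."""
    deadline = time.monotonic() + timeout
    while True:
        res = func()
        if res == value:
            return
        if time.monotonic() >= deadline:
            # This produces the "False != True after 600 seconds" message above.
            raise AssertionError('%s != %s after %s seconds' % (res, value, timeout))
        time.sleep(interval)
```

In the trace, assert_true_within_long is a thin wrapper that presumably calls this with value=True and a 600-second LONG_TIMEOUT, so a host that never reaches the awaited state surfaces as exactly this assertion failure.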

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 616
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #604
[Gal Ben Haim] Adding dr suite
[Barak Korren] Remove the populate_mock function from mock_runner
[Daniel Belenky] mock_runner: store shell cmd in a variable
[Barak Korren] mock_runner: Added timeout param
[Barak Korren] Make whitelist repo configurable via env vars
[Daniel Belenky] stdci_runner: let mock_runner manage timeout
[Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config
Changes for Build #605
[Gal Ben Haim] Adding dr suite
Changes for Build #606
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Barak Korren] Enable CI for kubevirt/client-python
[Daniel Belenky] Add timeout config to stdci dsl
[Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg
Changes for Builds #607 through #616 (one identical change in each)
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
-----------------
Failed Tests:
-----------------
1 test failed.
FAILED: 012_local_maintenance_sdk.local_maintenance
Error Message: False != True after 600 seconds
(captured logging and stack trace are identical to the Build #604 report above)

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 617
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
(cumulative changes for Builds #604 through #616, as listed in the Build #616 report above, plus:)
Changes for Build #617
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
-----------------
Failed Tests:
-----------------
1 test failed.
FAILED: 012_local_maintenance_sdk.local_maintenance
Error Message: False != True after 600 seconds
(captured logging and stack trace are identical to the Build #604 report above)

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 618
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
(cumulative changes for Builds #604 through #617, as listed in the reports above, plus:)
Changes for Build #618
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Yuval Turgeman] ovirt-node-ng: remove nightly master build
[Sandro Bonazzola] ovirt-iso-uploader: drop 4.1 jobs
-----------------
Failed Tests:
-----------------
1 test failed.
FAILED: 012_local_maintenance_sdk.local_maintenance
Error Message: False != True after 600 seconds
(captured logging and stack trace are identical to the Build #604 report above)

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 619
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
(cumulative changes for Builds #604 through #618, as listed in the reports above, plus:)
Changes for Build #619
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
-----------------
Failed Tests:
-----------------
1 test failed.
FAILED: 012_local_maintenance_sdk.local_maintenance
Error Message: False != True after 600 seconds
(captured logging and stack trace are identical to the Build #604 report above)

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 620
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
(cumulative changes for Builds #604 through #619, as listed in the reports above, plus:)
Changes for Build #620
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
-----------------
Failed Tests:
-----------------
1 test failed.
FAILED: 012_local_maintenance_sdk.local_maintenance
Error Message: False != True after 600 seconds
(captured logging and stack trace are identical to the Build #604 report above)
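The 012 scenario in these reports deactivates a host through the oVirt SDK and then polls `host_service.get().status` until it reads MAINTENANCE (the lambda at line 65 of the stack trace). An engine-free sketch of that wait, where `get_status` is a hypothetical stand-in for the real SDK call and the lowercase status strings mirror the ovirtsdk4 HostStatus values:

```python
import time

def wait_for_host_maintenance(get_status, timeout=600, interval=5):
    """Return True once get_status() reports 'maintenance', False on timeout.

    get_status stands in for `lambda: host_service.get().status` in the
    real test; the helper itself is hypothetical.
    """
    deadline = time.monotonic() + timeout
    while True:
        if get_status() == 'maintenance':
            return True
        if time.monotonic() >= deadline:
            # The suite turns this into "False != True after 600 seconds".
            return False
        time.sleep(interval)
```

A host stuck in an intermediate state (e.g. preparing_for_maintenance) for the whole 600-second window would produce exactly the repeated failure seen in builds #604 through #620.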

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 621
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
(cumulative changes for Builds #604 through #620, as listed in the reports above, plus:)
Changes for Build #621
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config
[Sandro Bonazzola] ovirt-iso-uploader: branched for 4.2
-----------------
Failed Tests:
-----------------
1 test failed.
FAILED: 010_local_maintenance_cli.local_maintenance

Error Message:
could not parse JSON: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3236 (Tue Sep 11 11:04:10 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=3237 (Tue Sep 11 11:04:11 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineMigratingAway\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Migration Source"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "bf35865f", "local_conf_timestamp": 3237, "host-ts": 3236}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3229 (Tue Sep 11 11:04:04 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=3229 (Tue Sep 11 11:04:04 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"
-------------------- >> begin captured logging << --------------------
root: INFO: * Waiting For System Stability...
lago.ssh: DEBUG: start task:259b0c73-be7f-4576-8bdd-e856a6ad648e:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:259b0c73-be7f-4576-8bdd-e856a6ad648e:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running e96f0894 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command e96f0894 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command e96f0894 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3216 (Tue Sep 11 11:03:50 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=3216 (Tue Sep 11 11:03:50 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUp\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Up"}, "score": 3400, "stopped": false, "maintenance": false, "crc32": "44316020", "local_conf_timestamp": 3216, "host-ts": 3216}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3209 (Tue Sep 11 11:03:43 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=3209 (Tue Sep 11 11:03:43 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "451318ef", "local_conf_timestamp": 3209, "host-ts": 3209}, "global_maintenance": false} lago.ssh: DEBUG: start task:aa1ffebb-8d9d-4e12-a497-b14a4cfce32b:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:aa1ffebb-8d9d-4e12-a497-b14a4cfce32b:Get ssh client 
for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running e9de0d3e on lago-he-basic-ansible-suite-master-host-0: hosted-engine --set-maintenance --mode=local lago.ssh: DEBUG: Command e9de0d3e on lago-he-basic-ansible-suite-master-host-0 returned with 0 root: INFO: * Waiting for engine to migrate... lago.ssh: DEBUG: start task:cdb4603c-a16c-4607-a94c-9403deb51456:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:cdb4603c-a16c-4607-a94c-9403deb51456:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running ea6a152c on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command ea6a152c on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command ea6a152c on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3226 (Tue Sep 11 11:04:00 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=3226 (Tue Sep 11 11:04:00 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineMigratingAway\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "fe67372c", "local_conf_timestamp": 3226, "host-ts": 3226}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3219 (Tue Sep 11 11:03:53 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=3219 (Tue Sep 11 11:03:53 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}, "score": 3000, "stopped": false, 
"maintenance": false, "crc32": "4d10b792", "local_conf_timestamp": 3219, "host-ts": 3219}, "global_maintenance": false} lago.ssh: DEBUG: start task:a5d9b5b4-0f8d-40a3-9e05-f280911e9e7c:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:a5d9b5b4-0f8d-40a3-9e05-f280911e9e7c:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running f0eb2846 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command f0eb2846 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command f0eb2846 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3236 (Tue Sep 11 11:04:10 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=3237 (Tue Sep 11 11:04:11 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineMigratingAway\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Migration Source"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "bf35865f", "local_conf_timestamp": 3237, "host-ts": 3236}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3229 (Tue Sep 11 11:04:04 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=3229 (Tue Sep 11 11:04:04 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "55144615", "local_conf_timestamp": 3229, "host-ts": 3229}, "global_maintenance": false} lago.ssh: DEBUG: start 
task:ac9e7cbf-4f2d-4af5-9131-db6a31ca99a2:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:ac9e7cbf-4f2d-4af5-9131-db6a31ca99a2:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running f353dfd8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command f353dfd8 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command f353dfd8 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3236 (Tue Sep 11 11:04:10 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=3237 (Tue Sep 11 11:04:11 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineMigratingAway\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Migration Source"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "bf35865f", "local_conf_timestamp": 3237, "host-ts": 3236}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3229 (Tue Sep 11 11:04:04 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=3229 (Tue Sep 11 11:04:04 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "55144615", "local_conf_timestamp": 3229, "host-ts": 3229}, "global_maintenance": false} lago.ssh: DEBUG: start task:c43ce49d-f9c4-4879-88d2-ba1071c5cfab:Get ssh client for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: end task:c43ce49d-f9c4-4879-88d2-ba1071c5cfab:Get ssh client 
for lago-he-basic-ansible-suite-master-host-0: lago.ssh: DEBUG: Running f59e60c4 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json lago.ssh: DEBUG: Command f59e60c4 on lago-he-basic-ansible-suite-master-host-0 returned with 0 lago.ssh: DEBUG: Command f59e60c4 on lago-he-basic-ansible-suite-master-host-0 output: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3236 (Tue Sep 11 11:04:10 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=3237 (Tue Sep 11 11:04:11 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineMigratingAway\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Migration Source"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "bf35865f", "local_conf_timestamp": 3237, "host-ts": 3236}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3229 (Tue Sep 11 11:04:04 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=3229 (Tue Sep 11 11:04:04 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {" ovirtlago.testlib: ERROR: * Unhandled exception in <function <lambda> at 0x7f10d8c6e668> Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 234, in assert_equals_within res = func() File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/010_local_maintenance_cli.py", line 36, in <lambda> testlib.assert_true_within_long(lambda: _get_he_status(host) File 
"/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/010_local_maintenance_cli.py", line 98, in _get_he_status raise RuntimeError('could not parse JSON: %s' % ret.out) RuntimeError: could not parse JSON: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3236 (Tue Sep 11 11:04:10 2018)\nhost-id=1\nscore=3000\nvm_conf_refresh_time=3237 (Tue Sep 11 11:04:11 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineMigratingAway\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Migration Source"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "bf35865f", "local_conf_timestamp": 3237, "host-ts": 3236}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=3229 (Tue Sep 11 11:04:04 2018)\nhost-id=2\nscore=3000\nvm_conf_refresh_time=3229 (Tue Sep 11 11:04:04 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {" --------------------- >> end captured logging << --------------------- Stack Trace: File "/usr/lib64/python2.7/unittest/case.py", line 369, in run testMethod() File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test test() File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper return func(get_test_prefix(), *args, **kwargs) File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/010_local_maintenance_cli.py", line 125, in local_maintenance 
_wait_for_engine_migration(host, he_index, "bad", "Migration Destination") File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/010_local_maintenance_cli.py", line 36, in _wait_for_engine_migration testlib.assert_true_within_long(lambda: _get_he_status(host) File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long assert_equals_within_long(func, True, allowed_exceptions) File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 234, in assert_equals_within res = func() File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/010_local_maintenance_cli.py", line 36, in <lambda> testlib.assert_true_within_long(lambda: _get_he_status(host) File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/010_local_maintenance_cli.py", line 98, in _get_he_status raise RuntimeError('could not parse JSON: %s' % ret.out) 'could not parse JSON: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3236 (Tue Sep 11 11:04:10 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=3237 (Tue Sep 11 11:04:11 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineMigratingAway\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Migration Source"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "bf35865f", "local_conf_timestamp": 3237, "host-ts": 3236}, "2": {"conf_on_shared_storage": true, "live-data": true, 
"extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3229 (Tue Sep 11 11:04:04 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=3229 (Tue Sep 11 11:04:04 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineDown\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"\n-------------------- >> begin captured logging << --------------------\nroot: INFO: * Waiting For System Stability...\nlago.ssh: DEBUG: start task:259b0c73-be7f-4576-8bdd-e856a6ad648e:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:259b0c73-be7f-4576-8bdd-e856a6ad648e:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running e96f0894 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command e96f0894 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command e96f0894 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3216 (Tue Sep 11 11:03:50 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3216 (Tue Sep 11 11:03:50 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUp\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Up"}, "score": 3400, "stopped": false, "maintenance": false, "crc32": "44316020", "local_conf_timestamp": 3216, "host-ts": 3216}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3209 (Tue Sep 11 11:03:43 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=3209 (Tue Sep 11 11:03:43 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineDown\\nstopped=False\\n", "hostname": 
"lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "451318ef", "local_conf_timestamp": 3209, "host-ts": 3209}, "global_maintenance": false}\n\nlago.ssh: DEBUG: start task:aa1ffebb-8d9d-4e12-a497-b14a4cfce32b:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:aa1ffebb-8d9d-4e12-a497-b14a4cfce32b:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running e9de0d3e on lago-he-basic-ansible-suite-master-host-0: hosted-engine --set-maintenance --mode=local\nlago.ssh: DEBUG: Command e9de0d3e on lago-he-basic-ansible-suite-master-host-0 returned with 0\nroot: INFO: * Waiting for engine to migrate...\nlago.ssh: DEBUG: start task:cdb4603c-a16c-4607-a94c-9403deb51456:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:cdb4603c-a16c-4607-a94c-9403deb51456:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running ea6a152c on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command ea6a152c on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command ea6a152c on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3226 (Tue Sep 11 11:04:00 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=3226 (Tue Sep 11 11:04:00 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineMigratingAway\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Up"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "fe67372c", "local_conf_timestamp": 3226, 
"host-ts": 3226}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3219 (Tue Sep 11 11:03:53 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=3219 (Tue Sep 11 11:03:53 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineDown\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "4d10b792", "local_conf_timestamp": 3219, "host-ts": 3219}, "global_maintenance": false}\n\nlago.ssh: DEBUG: start task:a5d9b5b4-0f8d-40a3-9e05-f280911e9e7c:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:a5d9b5b4-0f8d-40a3-9e05-f280911e9e7c:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running f0eb2846 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command f0eb2846 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command f0eb2846 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3236 (Tue Sep 11 11:04:10 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=3237 (Tue Sep 11 11:04:11 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineMigratingAway\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Migration Source"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "bf35865f", "local_conf_timestamp": 3237, "host-ts": 3236}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": 
"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3229 (Tue Sep 11 11:04:04 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=3229 (Tue Sep 11 11:04:04 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineDown\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "55144615", "local_conf_timestamp": 3229, "host-ts": 3229}, "global_maintenance": false}\n\nlago.ssh: DEBUG: start task:ac9e7cbf-4f2d-4af5-9131-db6a31ca99a2:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:ac9e7cbf-4f2d-4af5-9131-db6a31ca99a2:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running f353dfd8 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command f353dfd8 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command f353dfd8 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3236 (Tue Sep 11 11:04:10 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=3237 (Tue Sep 11 11:04:11 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineMigratingAway\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Migration Source"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "bf35865f", "local_conf_timestamp": 3237, "host-ts": 3236}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3229 (Tue Sep 11 11:04:04 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=3229 
(Tue Sep 11 11:04:04 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineDown\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "55144615", "local_conf_timestamp": 3229, "host-ts": 3229}, "global_maintenance": false}\n\nlago.ssh: DEBUG: start task:c43ce49d-f9c4-4879-88d2-ba1071c5cfab:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: end task:c43ce49d-f9c4-4879-88d2-ba1071c5cfab:Get ssh client for lago-he-basic-ansible-suite-master-host-0:\nlago.ssh: DEBUG: Running f59e60c4 on lago-he-basic-ansible-suite-master-host-0: hosted-engine --vm-status --json\nlago.ssh: DEBUG: Command f59e60c4 on lago-he-basic-ansible-suite-master-host-0 returned with 0\nlago.ssh: DEBUG: Command f59e60c4 on lago-he-basic-ansible-suite-master-host-0 output:\n {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3236 (Tue Sep 11 11:04:10 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=3237 (Tue Sep 11 11:04:11 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineMigratingAway\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Migration Source"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "bf35865f", "local_conf_timestamp": 3237, "host-ts": 3236}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3229 (Tue Sep 11 11:04:04 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=3229 (Tue Sep 11 11:04:04 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineDown\\nstopped=False\\n", "hostname": 
"lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"\novirtlago.testlib: ERROR: * Unhandled exception in <function <lambda> at 0x7f10d8c6e668>\nTraceback (most recent call last):\n File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 234, in assert_equals_within\n res = func()\n File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/010_local_maintenance_cli.py", line 36, in <lambda>\n testlib.assert_true_within_long(lambda: _get_he_status(host)\n File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/010_local_maintenance_cli.py", line 98, in _get_he_status\n raise RuntimeError(\'could not parse JSON: %s\' % ret.out)\nRuntimeError: could not parse JSON: {"1": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3236 (Tue Sep 11 11:04:10 2018)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=3237 (Tue Sep 11 11:04:11 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineMigratingAway\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-0", "host-id": 1, "engine-status": {"health": "good", "vm": "up", "detail": "Migration Source"}, "score": 3000, "stopped": false, "maintenance": false, "crc32": "bf35865f", "local_conf_timestamp": 3237, "host-ts": 3236}, "2": {"conf_on_shared_storage": true, "live-data": true, "extra": "metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3229 (Tue Sep 11 11:04:04 2018)\\nhost-id=2\\nscore=3000\\nvm_conf_refresh_time=3229 (Tue Sep 11 11:04:04 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineDown\\nstopped=False\\n", "hostname": "lago-he-basic-ansible-suite-master-host-1", "host-id": 2, "engine-status": {"\n--------------------- >> end captured logging << ---------------------'
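The `RuntimeError: could not parse JSON` above comes from the captured output of `hosted-engine --vm-status --json` being cut off mid-stream (note the status JSON ending abruptly at `"engine-status": {"`), so the suite's `_get_he_status` helper raised on the truncated string and aborted the wait loop. A more tolerant helper could treat a parse failure as "status not readable yet" and retry before giving up. A minimal sketch, assuming a hypothetical `run_status_cmd` callable standing in for the suite's SSH command runner (this is not the actual `010_local_maintenance_cli.py` code):

```python
import json
import time


def get_he_status(run_status_cmd, retries=5, delay=2):
    """Fetch and parse `hosted-engine --vm-status --json`, tolerating
    occasionally truncated output.

    run_status_cmd: hypothetical callable returning the command's raw
    stdout as a string (stand-in for the test suite's SSH runner).
    """
    last_err = None
    for _ in range(retries):
        out = run_status_cmd()
        try:
            return json.loads(out)
        except ValueError as err:  # JSONDecodeError is a ValueError subclass
            # Output was truncated or otherwise invalid; retry after a pause
            # instead of failing the whole polling loop on one bad read.
            last_err = err
            time.sleep(delay)
    raise RuntimeError(
        'could not parse JSON after %d attempts: %r' % (retries, last_err))
```

With this shape, a single truncated read like the one in the log costs one retry rather than failing the enclosing `assert_true_within_long` wait outright.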

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 622
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #604
[Gal Ben Haim] Adding dr suite
[Barak Korren] Remove the populate_mock function from mock_runner
[Daniel Belenky] mock_runner: store shell cmd in a variable
[Barak Korren] mock_runner: Added timeout param
[Barak Korren] Make whitelist repo configurable via env vars
[Daniel Belenky] stdci_runner: let mock_runner manage timeout
[Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config

Changes for Build #605
[Gal Ben Haim] Adding dr suite

Changes for Build #606
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Barak Korren] Enable CI for kubevirt/client-python
[Daniel Belenky] Add timeout config to stdci dsl
[Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg

Changes for Build #607
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #608
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #609
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #610
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #611
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #612
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #613
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #614
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #615
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #616
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #617
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #618
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Yuval Turgeman] ovirt-node-ng: remove nightly master build
[Sandro Bonazzola] ovirt-iso-uploader: drop 4.1 jobs

Changes for Build #619
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #620
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #621
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config
[Sandro Bonazzola] ovirt-iso-uploader: branched for 4.2

Changes for Build #622
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config

-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: 012_local_maintenance_sdk.local_maintenance

Error Message:
False != True after 600 seconds
-------------------- >> begin captured logging << --------------------
root: INFO: * Waiting For System Stability...
root: INFO: * Performing Deactivation...
--------------------- >> end captured logging << ---------------------

Stack Trace:
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper
    prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/012_local_maintenance_sdk.py", line 65, in local_maintenance
    lambda: host_service.get().status == types.HostStatus.MAINTENANCE or
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long
    assert_equals_within_long(func, True, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long
    func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 252, in assert_equals_within
    '%s != %s after %s seconds' % (res, value, timeout)
'False != True after 600 seconds\n-------------------- >> begin captured logging << --------------------\nroot: INFO: * Waiting For System Stability...\nroot: INFO: * Performing Deactivation...\n--------------------- >> end captured logging << ---------------------'
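The recurring `False != True after 600 seconds` message is produced by `ovirtlago.testlib.assert_equals_within`, which repeatedly evaluates a callable (here, "is the host in MAINTENANCE status yet?") until it returns the expected value or the timeout expires. A simplified re-implementation for illustration only (not the actual ovirtlago code; `interval` is an assumed parameter):

```python
import time


def assert_equals_within(func, value, timeout, interval=3):
    """Keep calling func() until its result equals `value`, or raise an
    AssertionError once `timeout` seconds have elapsed.

    Simplified sketch of the polling pattern behind the failure message;
    the real testlib also supports an allowed-exceptions list.
    """
    deadline = time.time() + timeout
    while True:
        res = func()
        if res == value:
            return
        if time.time() >= deadline:
            # Mirrors the message format seen in the failed builds above.
            raise AssertionError(
                '%s != %s after %s seconds' % (res, value, timeout))
        time.sleep(interval)
```

Under this pattern, the builds above simply mean the lambda checking `host_service.get().status == types.HostStatus.MAINTENANCE` never returned True within the 600-second LONG_TIMEOUT window.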

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 623
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #604
[Gal Ben Haim] Adding dr suite
[Barak Korren] Remove the populate_mock function from mock_runner
[Daniel Belenky] mock_runner: store shell cmd in a variable
[Barak Korren] mock_runner: Added timeout param
[Barak Korren] Make whitelist repo configurable via env vars
[Daniel Belenky] stdci_runner: let mock_runner manage timeout
[Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config

Changes for Build #605
[Gal Ben Haim] Adding dr suite

Changes for Build #606
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Barak Korren] Enable CI for kubevirt/client-python
[Daniel Belenky] Add timeout config to stdci dsl
[Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg

Changes for Build #607
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #608
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #609
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #610
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #611
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #612
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #613
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #614
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #615
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #616
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #617
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #618
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Yuval Turgeman] ovirt-node-ng: remove nightly master build
[Sandro Bonazzola] ovirt-iso-uploader: drop 4.1 jobs

Changes for Build #619
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #620
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #621
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config
[Sandro Bonazzola] ovirt-iso-uploader: branched for 4.2

Changes for Build #622
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config

Changes for Build #623
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config

-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: 012_local_maintenance_sdk.local_maintenance

Error Message:
False != True after 600 seconds
-------------------- >> begin captured logging << --------------------
root: INFO: * Waiting For System Stability...
root: INFO: * Performing Deactivation...
--------------------- >> end captured logging << ---------------------

Stack Trace:
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper
    prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/012_local_maintenance_sdk.py", line 65, in local_maintenance
    lambda: host_service.get().status == types.HostStatus.MAINTENANCE or
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long
    assert_equals_within_long(func, True, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long
    func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 252, in assert_equals_within
    '%s != %s after %s seconds' % (res, value, timeout)
'False != True after 600 seconds\n-------------------- >> begin captured logging << --------------------\nroot: INFO: * Waiting For System Stability...\nroot: INFO: * Performing Deactivation...\n--------------------- >> end captured logging << ---------------------'

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 624
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #604
[Gal Ben Haim] Adding dr suite
[Barak Korren] Remove the populate_mock function from mock_runner
[Daniel Belenky] mock_runner: store shell cmd in a variable
[Barak Korren] mock_runner: Added timeout param
[Barak Korren] Make whitelist repo configurable via env vars
[Daniel Belenky] stdci_runner: let mock_runner manage timeout
[Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config

Changes for Build #605
[Gal Ben Haim] Adding dr suite

Changes for Build #606
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Barak Korren] Enable CI for kubevirt/client-python
[Daniel Belenky] Add timeout config to stdci dsl
[Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg

Changes for Build #607
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #608
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #609
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #610
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #611
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #612
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #613
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #614
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #615
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #616
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #617
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #618
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Yuval Turgeman] ovirt-node-ng: remove nightly master build
[Sandro Bonazzola] ovirt-iso-uploader: drop 4.1 jobs

Changes for Build #619
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #620
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file

Changes for Build #621
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config
[Sandro Bonazzola] ovirt-iso-uploader: branched for 4.2

Changes for Build #622
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config

Changes for Build #623
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config

Changes for Build #624
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config

-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: 012_local_maintenance_sdk.local_maintenance

Error Message:
False != True after 600 seconds
-------------------- >> begin captured logging << --------------------
root: INFO: * Waiting For System Stability...
root: INFO: * Performing Deactivation...
--------------------- >> end captured logging << ---------------------

Stack Trace:
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper
    prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-system-tests_he-basic-ansible-suite-master/ovirt-system-tests/he-basic-ansible-suite-master/test-scenarios/012_local_maintenance_sdk.py", line 65, in local_maintenance
    lambda: host_service.get().status == types.HostStatus.MAINTENANCE or
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long
    assert_equals_within_long(func, True, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long
    func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 252, in assert_equals_within
    '%s != %s after %s seconds' % (res, value, timeout)
'False != True after 600 seconds\n-------------------- >> begin captured logging << --------------------\nroot: INFO: * Waiting For System Stability...\nroot: INFO: * Performing Deactivation...\n--------------------- >> end captured logging << ---------------------'

Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-ansible-suite-maste...
Build Number: 625
Build Status: Fixed
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #604
[Gal Ben Haim] Adding dr suite
[Barak Korren] Remove the populate_mock function from mock_runner
[Daniel Belenky] mock_runner: store shell cmd in a variable
[Barak Korren] mock_runner: Added timeout param
[Barak Korren] Make whitelist repo configurable via env vars
[Daniel Belenky] stdci_runner: let mock_runner manage timeout
[Greg Sheremeta] remove 4.1 change queue from ovirt-engine-nodejs config
Changes for Build #605
[Gal Ben Haim] Adding dr suite
Changes for Build #606
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Barak Korren] Enable CI for kubevirt/client-python
[Daniel Belenky] Add timeout config to stdci dsl
[Daniel Belenky] stdci_runner.groovy: utilize DSL's timeout cfg
Changes for Build #607
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #608
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #609
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #610
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #611
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #612
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #613
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #614
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #615
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #616
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #617
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #618
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
[Yuval Turgeman] ovirt-node-ng: remove nightly master build
[Sandro Bonazzola] ovirt-iso-uploader: drop 4.1 jobs
Changes for Build #619
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #620
[Ehud Yonasi] ovirt-master.repo: Added new packages to repo file
Changes for Build #621
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config
[Sandro Bonazzola] ovirt-iso-uploader: branched for 4.2
Changes for Build #622
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config
Changes for Build #623
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config
Changes for Build #624
[Miguel Duarte Barroso] Repos: add python2-ovsdbapp to the reposync config
Changes for Build #625
[Ales Musil] network: syncutil: Improve sync utility with kwargs
[Gal Ben Haim] Add jobs for ovs-cni
-----------------
Failed Tests:
-----------------
All tests passed
participants (2)
-
jenkins@jenkins.phx.ovirt.org
-
Simone Tiraboschi