test_incremental_backup_vm2 failed (was: Change in ovirt-system-tests[master]: WIP: Unite basic-suite-master 002_ and 004_ with HE)
by Yedidyah Bar David
Hi all,
On Thu, Feb 18, 2021 at 11:17 AM Code Review <gerrit(a)ovirt.org> wrote:
>
> From Jenkins CI <jenkins(a)ovirt.org>:
>
> Jenkins CI has posted comments on this change. ( https://gerrit.ovirt.org/c/ovirt-system-tests/+/113452 )
This patch ^^^ makes the 002_ and 004_ test modules identical between
basic-suite and he-basic-suite.
>
> Change subject: WIP: Unite basic-suite-master 002_ and 004_ with HE
> ......................................................................
>
>
> Patch Set 29: Continuous-Integration-1
>
> Build Failed
>
> https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/15615/ : FAILURE
CI ran three suites on it: basic-suite and ansible-suite passed;
he-basic-suite failed [1] with this in engine.log [2]:
========================================================================
2021-02-18 10:06:33,389+01 INFO
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-81)
[test_incremental_backup] Exception in invoking callback of command
RedefineVmCheckpoint (20051278-ed39-40e5-9350-c47bd51fd6c0):
ClassCastException: class
org.ovirt.engine.core.bll.RedefineVmCheckpointCommand cannot be cast
to class org.ovirt.engine.core.bll.SerialChildExecutingCommand
(org.ovirt.engine.core.bll.RedefineVmCheckpointCommand and
org.ovirt.engine.core.bll.SerialChildExecutingCommand are in unnamed
module of loader 'deployment.engine.ear.bll.jar' @176ed0f7)
2021-02-18 10:06:33,389+01 ERROR
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-81)
[test_incremental_backup] Error invoking callback method 'doPolling'
for 'ACTIVE' command '20051278-ed39-40e5-9350-c47bd51fd6c0'
2021-02-18 10:06:33,390+01 ERROR
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-81)
[test_incremental_backup] Exception: java.lang.ClassCastException:
class org.ovirt.engine.core.bll.RedefineVmCheckpointCommand cannot be
cast to class org.ovirt.engine.core.bll.SerialChildExecutingCommand
(org.ovirt.engine.core.bll.RedefineVmCheckpointCommand and
org.ovirt.engine.core.bll.SerialChildExecutingCommand are in unnamed
module of loader 'deployment.engine.ear.bll.jar' @176ed0f7)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:29)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:360)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:511)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:227)
========================================================================
Any idea?
I also now see that basic-suite fails [3], but on a different test -
test_import_floating_disk. Not sure that's related.
Thanks and best regards,
[1] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/156...
[2] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/156...
[3] https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_night...
>
>
> --
> To view, visit https://gerrit.ovirt.org/c/ovirt-system-tests/+/113452
> To unsubscribe, or for help writing mail filters, visit https://gerrit.ovirt.org/settings
>
> Gerrit-Project: ovirt-system-tests
> Gerrit-Branch: master
> Gerrit-Change-Id: Ied836cda6b622dbebdb869e0b83fa4c0e0b7ca2c
> Gerrit-Change-Number: 113452
> Gerrit-PatchSet: 29
> Gerrit-Owner: Yedidyah Bar David <didi(a)redhat.com>
> Gerrit-Reviewer: Anton Marchukov <amarchuk(a)redhat.com>
> Gerrit-Reviewer: Dafna Ron <dron(a)redhat.com>
> Gerrit-Reviewer: Dusan Fodor <dfodor(a)redhat.com>
> Gerrit-Reviewer: Gal Ben Haim <galbh2(a)gmail.com>
> Gerrit-Reviewer: Galit Rosenthal <grosenth(a)redhat.com>
> Gerrit-Reviewer: Jenkins CI <jenkins(a)ovirt.org>
> Gerrit-Reviewer: Name of user not set #1001916
> Gerrit-Reviewer: Yedidyah Bar David <didi(a)redhat.com>
> Gerrit-Reviewer: Zuul CI <zuul(a)ovirt.org>
> Gerrit-Comment-Date: Thu, 18 Feb 2021 09:17:18 +0000
> Gerrit-HasComments: No
> Gerrit-Has-Labels: Yes
> Gerrit-MessageType: comment
>
--
Didi
Support for SSH keys other than RSA
by Artur Socha
Hi,
I have recently been working on adding support for SSH keys other than
RSA for the communication between ovirt-engine and the hosts (VDSes).
The entire effort is tracked in Bugzilla [1].
There are a couple of important changes I would like to share with you.
The first and most important is a change in the way the connection is
verified. Previously, fingerprints (SHA-256 by default, unless changed
via configuration) were used to verify whether the connection between
the engine and the host could be established. Now public keys are
compared instead (with one exception, for backward compatibility).
For backward compatibility, i.e. for previously added (legacy) hosts
whose fingerprint was calculated from the RSA public key (with the key
itself not stored in the database), the verification is done as before,
meaning we compare fingerprints only. After an upgrade, the whole setup
is expected to work without any manual intervention.
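As a rough sketch of that rule (illustrative Python only; the actual
implementation is in the engine's Java code):

# Illustrative sketch of the verification rule described above; the
# actual implementation lives in the engine's Java code.
def verify_host_key(stored_public_key, stored_fingerprint,
                    offered_public_key, offered_fingerprint):
    if stored_public_key:
        # New behavior: compare the public keys directly.
        return stored_public_key == offered_public_key
    # Legacy host: only an RSA-based fingerprint is stored in the db,
    # so fall back to comparing fingerprints, as before.
    return stored_fingerprint == offered_fingerprint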
However, there are a couple of options to 'migrate' a legacy fingerprint
to whatever key the host's SSH server considers the strongest:
1) In the database, remove the sshkeyfingerprint value, i.e.:
update vds_static set sshkeyfingerprint='' where vds_id = 'PUT_HERE_HOST_ID'
2) REST: prepare a request with a blank fingerprint for 'legacy' hosts.
Please see the documentation [2]. The fingerprint and public key will
be re-entered (see the sketch after this list).
3) Reinstall the host / install a new host.
4) Manually deploy the key and update the host's
vds_static.sshkeyfingerprint and vds_static.public_key.
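A rough sketch of option 2 using the Python SDK (the URL, credentials
and host name below are placeholders; see the documentation [2] for the
authoritative request format):

import ovirtsdk4 as sdk4
import ovirtsdk4.types as types

# Placeholder connection details - adjust for your engine.
connection = sdk4.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
hosts_service = connection.system_service().hosts_service()
# Placeholder host name; assumes exactly one match.
host = hosts_service.list(search='name=legacy-host')[0]
# Send a blank fingerprint; the engine should then re-fetch both the
# fingerprint and the public key on the next connection.
hosts_service.host_service(host.id).update(
    types.Host(ssh=types.Ssh(fingerprint='')),
)
connection.close()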
On the engine's UI side there is still a way to fetch fingerprints (on
the 'New Host' panel), but we anticipate that soon there will be a
public key (OpenSSH format) instead.
Please let me know if you have any questions or doubts, or if you
encounter any issues in this area.
The patches (referenced in the BZ [1]) have been merged into master, and
this feature is expected to ship with the 4.4.5 upstream release.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1837221
[2]
https://jenkins.ovirt.org/job/ovirt-engine-api-model_standard-check-patch...
best,
Artur
ansible: Unsupported parameters found in auth
by Yedidyah Bar David
Hi all,
he-basic-suite fails [1] with:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Ensure that the
target datacenter is present]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
"Unsupported parameters for (ovirt_datacenter) module: compress,
timeout found in auth. Supported parameters include: ca_file, headers,
hostname, insecure, kerberos, password, token, url, username"}
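If I read the error correctly, the auth dict the role passes carries
keys the module's argument spec does not list, roughly like this (a
hypothetical illustration; the supported-key list is copied from the
error above):

# Hypothetical illustration of the mismatch reported above: the auth
# dict handed to ovirt_datacenter carries keys the module rejects.
auth = {
    'url': 'https://engine.example.com/ovirt-engine/api',
    'username': 'admin@internal',
    'password': 'secret',
    'insecure': True,
    'compress': True,  # rejected by the module's validation
    'timeout': 0,      # rejected by the module's validation
}

SUPPORTED = {'ca_file', 'headers', 'hostname', 'insecure', 'kerberos',
             'password', 'token', 'url', 'username'}
# A pruned copy that would pass the module's validation.
pruned_auth = {k: v for k, v in auth.items() if k in SUPPORTED}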
Any clue?
Thanks,
[1] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1918/
--
Didi
"Too many open files" in vdsm.log after 380 migrations
by Yedidyah Bar David
Hi all,
I ran a loop of [1] (from [2]). The loop succeeded for ~ 380
iterations, then failed with 'Too many open files'. First failure was:
2021-02-08 02:21:15,702+0100 ERROR (jsonrpc/4) [storage.HSM] Could not
connect to storageServer (hsm:2446)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line
2443, in connectStorageServer
conObj.connect()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py",
line 449, in connect
return self._mountCon.connect()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py",
line 171, in connect
self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line
210, in mount
cgroup=cgroup)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py",
line 56, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py",
line 54, in <lambda>
**kwargs)
File "<string>", line 2, in mount
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772,
in _callmethod
raise convert_to_error(kind, result)
OSError: [Errno 24] Too many open files
But obviously, once it did, it continued failing for this reason on
many later operations.
Is this considered a bug? Do we actively try to prevent such cases? If
so, should I open one and attach logs? Or can it be considered a
"corner case"?
Using vdsm-4.40.50.3-37.git7883b3b43.el8.x86_64 from
ost-images-el8-he-installed-1-202102021144.x86_64 .
I can also provide access to the machine(s) if needed, for now.
Thanks and best regards,
[1] https://gerrit.ovirt.org/gitweb?p=ovirt-system-tests.git;a=blob;f=he-basi...
[2] https://gerrit.ovirt.org/c/ovirt-system-tests/+/113300
--
Didi
Network error during communication with the Host (was: [oVirt Jenkins] ovirt-system-tests_basic-suite-master_nightly - Build # 857 - Still Failing!)
by Yedidyah Bar David
On Tue, Feb 9, 2021 at 7:24 AM <jenkins(a)jenkins.phx.ovirt.org> wrote:
>
> Project: https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/
> Build: https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_night...
> Build Number: 857
> Build Status: Still Failing
> Triggered By: Started by timer
>
> -------------------------------------
> Changes Since Last Success:
> -------------------------------------
> Changes for Build #855
> [Eitan Raviv] network: remove fqdn from /etc/hosts
>
>
> Changes for Build #856
> [Eitan Raviv] network: remove fqdn from /etc/hosts
>
>
> Changes for Build #857
> [Eitan Raviv] network: remove fqdn from /etc/hosts
>
>
>
>
> -----------------
> Failed Tests:
> -----------------
> 1 tests failed.
> FAILED: basic-suite-master.test-scenarios.test_002_bootstrap.test_add_nfs_master_storage_domain
>
> Error Message:
> AssertionError: False != True after 600 seconds
>
> Stack Trace:
> engine = <ovirtsdk4.services.SystemService object at 0x7fda58b32828>
> event_id = [956]
>
> @contextlib.contextmanager
> def wait_for_event(engine, event_id):
> '''
> event_id could either be an int - a single
> event ID or a list - multiple event IDs
> that all will be checked
> '''
> events = engine.events_service()
> last_event = int(events.list(max=2)[0].id)
> try:
> > yield
>
> ../ost_utils/ost_utils/engine_utils.py:38:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> api = <ovirtsdk4.Connection object at 0x7fda58b32748>
> domain = <ovirtsdk4.types.StorageDomain object at 0x7fda58d1fac8>
> dc_name = 'test-dc'
>
> def add(api, domain, dc_name):
> system_service = api.system_service()
> sds_service = system_service.storage_domains_service()
> with engine_utils.wait_for_event(system_service, 956): # USER_ADD_STORAGE_DOMAIN(956)
> > sd = sds_service.add(domain)
>
> ../ost_utils/ost_utils/storage_utils/domain.py:33:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> self = <ovirtsdk4.services.StorageDomainsService object at 0x7fda57ca4f60>
> storage_domain = <ovirtsdk4.types.StorageDomain object at 0x7fda58d1fac8>
> headers = None, query = {}, wait = True, kwargs = {}
>
> def add(
> self,
> storage_domain,
> headers=None,
> query=None,
> wait=True,
> **kwargs
> ):
> """
> Adds a new storage domain.
> Creation of a new <<types/storage_domain,StorageDomain>> requires the `name`, `type`, `host`, and `storage`
> attributes. Identify the `host` attribute with the `id` or `name` attributes. In {product-name} 3.6 and
> later you can enable the wipe after delete option by default on the storage domain. To configure this, specify
> `wipe_after_delete` in the POST request. This option can be edited after the domain is created, but doing so will
> not change the wipe after delete property of disks that already exist.
> To add a new storage domain with specified `name`, `type`, `storage.type`, `storage.address`, and `storage.path`,
> and using a host with an id `123`, send a request like this:
> [source]
> ----
> POST /ovirt-engine/api/storageDomains
> ----
> With a request body like this:
> [source,xml]
> ----
> <storage_domain>
> <name>mydata</name>
> <type>data</type>
> <storage>
> <type>nfs</type>
> <address>mynfs.example.com</address>
> <path>/exports/mydata</path>
> </storage>
> <host>
> <name>myhost</name>
> </host>
> </storage_domain>
> ----
> To create a new NFS ISO storage domain send a request like this:
> [source,xml]
> ----
> <storage_domain>
> <name>myisos</name>
> <type>iso</type>
> <storage>
> <type>nfs</type>
> <address>mynfs.example.com</address>
> <path>/export/myisos</path>
> </storage>
> <host>
> <name>myhost</name>
> </host>
> </storage_domain>
> ----
> To create a new iSCSI storage domain send a request like this:
> [source,xml]
> ----
> <storage_domain>
> <name>myiscsi</name>
> <type>data</type>
> <storage>
> <type>iscsi</type>
> <logical_units>
> <logical_unit id="3600144f09dbd050000004eedbd340001"/>
> <logical_unit id="3600144f09dbd050000004eedbd340002"/>
> </logical_units>
> </storage>
> <host>
> <name>myhost</name>
> </host>
> </storage_domain>
> ----
>
>
> This method supports the following parameters:
>
> `storage_domain`:: The storage domain to add.
>
> `headers`:: Additional HTTP headers.
>
> `query`:: Additional URL query parameters.
>
> `wait`:: If `True` wait for the response.
> """
> # Check the types of the parameters:
> Service._check_types([
> ('storage_domain', storage_domain, types.StorageDomain),
> ])
>
> # Build the URL:
> query = query or {}
>
> # Send the request and wait for the response:
> > return self._internal_add(storage_domain, headers, query, wait)
>
> /usr/lib64/python3.6/site-packages/ovirtsdk4/services.py:26182:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> self = <ovirtsdk4.services.StorageDomainsService object at 0x7fda57ca4f60>
> object = <ovirtsdk4.types.StorageDomain object at 0x7fda58d1fac8>, headers = {}
> query = {}, wait = True
>
> def _internal_add(self, object, headers=None, query=None, wait=None):
> """
> Executes an `add` method.
> """
> # Populate the headers:
> headers = headers or {}
>
> # Send the request and wait for the response:
> request = http.Request(method='POST', path=self._path, query=query, headers=headers)
> request.body = writer.Writer.write(object, indent=True)
> context = self._connection.send(request)
>
> def callback(response):
> if response.code in [200, 201, 202]:
> return self._internal_read_body(response)
> else:
> self._check_fault(response)
>
> future = Future(self._connection, context, callback)
> > return future.wait() if wait else future
>
> /usr/lib64/python3.6/site-packages/ovirtsdk4/service.py:232:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> self = <ovirtsdk4.service.Future object at 0x7fda57ca49b0>
>
> def wait(self):
> """
> Waits till the result of the operation that created this future is
> available.
> """
> response = self._connection.wait(self._context)
> > return self._code(response)
>
> /usr/lib64/python3.6/site-packages/ovirtsdk4/service.py:55:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> response = <ovirtsdk4.http.Response object at 0x7fda57ca4f98>
>
> def callback(response):
> if response.code in [200, 201, 202]:
> return self._internal_read_body(response)
> else:
> > self._check_fault(response)
>
> /usr/lib64/python3.6/site-packages/ovirtsdk4/service.py:229:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> self = <ovirtsdk4.services.StorageDomainsService object at 0x7fda57ca4f60>
> response = <ovirtsdk4.http.Response object at 0x7fda57ca4f98>
>
> def _check_fault(self, response):
> """
> Reads the response body assuming that it contains a fault message,
> converts it to an exception and raises it.
>
> This method is intended for internal use by other
> components of the SDK. Refrain from using it directly,
> as backwards compatibility isn't guaranteed.
> """
>
> body = self._internal_read_body(response)
> if isinstance(body, types.Fault):
> > self._raise_error(response, body)
>
> /usr/lib64/python3.6/site-packages/ovirtsdk4/service.py:132:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> response = <ovirtsdk4.http.Response object at 0x7fda57ca4f98>
> detail = <ovirtsdk4.types.Fault object at 0x7fda57ca44e0>
>
> @staticmethod
> def _raise_error(response, detail=None):
> """
> Creates and raises an error containing the details of the given HTTP
> response and fault.
>
> This method is intended for internal use by other components of the
> SDK. Refrain from using it directly, as backwards compatibility isn't
> guaranteed.
> """
> fault = detail if isinstance(detail, types.Fault) else None
>
> msg = ''
> if fault:
> if fault.reason:
> if msg:
> msg += ' '
> msg = msg + 'Fault reason is "%s".' % fault.reason
> if fault.detail:
> if msg:
> msg += ' '
> msg = msg + 'Fault detail is "%s".' % fault.detail
> if response:
> if response.code:
> if msg:
> msg += ' '
> msg = msg + 'HTTP response code is %s.' % response.code
> if response.message:
> if msg:
> msg += ' '
> msg = msg + 'HTTP response message is "%s".' % response.message
>
> if isinstance(detail, six.string_types):
> if msg:
> msg += ' '
> msg = msg + detail + '.'
>
> class_ = Error
> if response is not None:
> if response.code in [401, 403]:
> class_ = AuthError
> elif response.code == 404:
> class_ = NotFoundError
>
> error = class_(msg)
> error.code = response.code if response else None
> error.fault = fault
> > raise error
> E ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Network error during communication with the Host.]". HTTP response code is 400.
Did anyone check this?
It started failing yesterday. Also for me, e.g.:
https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/15458/
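For anyone trying to reproduce this outside OST, the failing call boils
down to the SDK's storage-domain add, roughly as below (a sketch with
placeholder values, mirroring the NFS example from the SDK docstring
quoted above):

import ovirtsdk4 as sdk4
import ovirtsdk4.types as types

# Placeholder engine credentials.
connection = sdk4.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
sds_service = connection.system_service().storage_domains_service()
# Equivalent of the XML example in the docstring above: an NFS data
# domain, added through a host named 'myhost'.
sd = sds_service.add(
    types.StorageDomain(
        name='mydata',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='myhost'),
        storage=types.HostStorage(
            type=types.StorageType.NFS,
            address='mynfs.example.com',
            path='/exports/mydata',
        ),
    ),
)
connection.close()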
Thanks and best regards,
>
> /usr/lib64/python3.6/site-packages/ovirtsdk4/service.py:118: Error
>
> During handling of the above exception, another exception occurred:
>
> engine_api = <ovirtsdk4.Connection object at 0x7fda58b32748>
> sd_nfs_host_storage_ip = '192.168.202.2'
>
> @order_by(_TEST_LIST)
> @pytest.mark.skipif(MASTER_SD_TYPE != 'nfs', reason='not using nfs')
> def test_add_nfs_master_storage_domain(engine_api, sd_nfs_host_storage_ip):
> > add_nfs_storage_domain(engine_api, sd_nfs_host_storage_ip)
>
> ../basic-suite-master/test-scenarios/test_002_bootstrap.py:545:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ../basic-suite-master/test-scenarios/test_002_bootstrap.py:554: in add_nfs_storage_domain
> nfs_version='v4_2')
> ../ost_utils/ost_utils/storage_utils/nfs.py:62: in add_domain
> domain.add(engine_api, p, dc_name)
> ../ost_utils/ost_utils/storage_utils/domain.py:37: in add
> lambda: sd_service.get().status == sdk4.types.StorageDomainStatus.UNATTACHED
> /usr/lib64/python3.6/contextlib.py:99: in __exit__
> self.gen.throw(type, value, traceback)
> ../ost_utils/ost_utils/engine_utils.py:44: in wait_for_event
> lambda:
> ../ost_utils/ost_utils/assertions.py:98: in assert_true_within_long
> assert_equals_within_long(func, True, allowed_exceptions)
> ../ost_utils/ost_utils/assertions.py:83: in assert_equals_within_long
> func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> func = <function wait_for_event.<locals>.<lambda> at 0x7fda57c9c2f0>
> value = True, timeout = 600, allowed_exceptions = [], initial_wait = 0
> error_message = 'False != True after 600 seconds'
>
> def assert_equals_within(
> func, value, timeout, allowed_exceptions=None, initial_wait=10,
> error_message=None
> ):
> allowed_exceptions = allowed_exceptions or []
> res = '<no-result-obtained>'
> with _EggTimer(timeout) as timer:
> while not timer.elapsed():
> try:
> res = func()
> if res == value:
> return
> except Exception as exc:
> if _instance_of_any(exc, allowed_exceptions):
> time.sleep(3)
> continue
>
> LOGGER.exception("Unhandled exception in %s", func)
> raise
>
> if initial_wait == 0:
> time.sleep(3)
> else:
> time.sleep(initial_wait)
> initial_wait = 0
> try:
> if error_message is None:
> error_message = '%s != %s after %s seconds' % (res, value, timeout)
> > raise AssertionError(error_message)
> E AssertionError: False != True after 600 seconds
>
> ../ost_utils/ost_utils/assertions.py:61: AssertionError
--
Didi