Hello,
I would like to give an update on this week's failures and the current OST status.
At the end of the week I am happy to say that we are no longer failing on
any issue and the testers are completely green.
We had a great week, with many people rallying together to fix OST and
resolve all outstanding issues.
During the week we saw and resolved the following failures:
*Master:*
002_bootstrap.get_host_devices - this was caused by a timing issue in the
test and was fixed by https://gerrit.ovirt.org/87526 (see the polling
sketch after this section).
001_initialize_engine.initialize_engine - the dwh service failed to start -
this seemed like a packaging issue and was resolved.
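For anyone curious about the nature of the timing fix: the usual pattern for
this kind of failure is to poll for the expected state instead of asserting
it once. The snippet below is only an illustrative sketch of that pattern in
Python, not the code from the patch itself; the helper name, timeout values,
and the host_service usage in the comment are made up for the example.

    import time

    def wait_for(condition, timeout=300, interval=3):
        """Poll `condition` until it returns True or `timeout` seconds pass."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            if condition():
                return True
            time.sleep(interval)
        raise AssertionError('condition not met within %d seconds' % timeout)

    # Instead of asserting immediately that the host reports devices,
    # give the engine time to refresh its device list, e.g.:
    # wait_for(lambda: len(host_service.devices_service().list()) > 0)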
*4.2:*
002_bootstrap.add_cluster - cluster.cpu.type error - this was resolved a
week ago by patch https://gerrit.ovirt.org/#/c/87126/ (a sketch of the
add_cluster call involved follows this section).
003_00_metrics_bootstrap.metrics_and_log_collector - this was resolved by
Shirly.
002_bootstrap.get_host_devices - the same timing issue as on master, also
fixed by https://gerrit.ovirt.org/87526.
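To make the cluster.cpu.type error easier to place, here is roughly what an
add_cluster step does through the oVirt Python SDK (ovirtsdk4). This is a
hedged sketch, not the suite's actual code or the content of the patch; the
engine URL, credentials, cluster/data-center names and the CPU type string
are placeholders.

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Connect to the engine API (placeholder URL and credentials).
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,
    )

    # Add a cluster with an explicit CPU type; an unsupported or mismatched
    # value here is what surfaces as a cluster.cpu.type error.
    clusters_service = connection.system_service().clusters_service()
    clusters_service.add(
        types.Cluster(
            name='test-cluster',
            cpu=types.Cpu(
                architecture=types.Architecture.X86_64,
                type='Intel Conroe Family',
            ),
            data_center=types.DataCenter(name='test-dc'),
        ),
    )
    connection.close()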
*4.1:*
001_initialize_engine.initialize_engine - the dwh service failed to start -
the same apparent packaging issue as on master, and it was resolved (see the
service-check sketch below).
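Since the dwh start failure showed up on both master and 4.1, here is a
minimal way to check the service state from a test or on the engine machine
itself. This is only a sketch, assuming the DWH service unit is the standard
ovirt-engine-dwhd; the function name and error message are mine, not taken
from the suite.

    import subprocess

    def service_is_active(name):
        """Return True if systemd reports the unit as active."""
        result = subprocess.run(
            ['systemctl', 'is-active', '--quiet', name],
            check=False,
        )
        return result.returncode == 0

    if not service_is_active('ovirt-engine-dwhd'):
        raise RuntimeError('ovirt-engine-dwhd failed to start; '
                           'check "journalctl -u ovirt-engine-dwhd"')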
The build-artifacts jobs still fail at a high rate, but *many of the
build-artifacts failures were caused by fcraw issues* and I am hoping this
will be resolved over time.
*Below you can see the chart of resolved OST issues and failures, broken
down by cause:*
* Based on feedback, I made some changes to the definitions:
Code = regression of working components/functionalities
Configurations = package-related issues
Other = failed build artifacts
Infra = infrastructure/OST/Lago related issues
[inline images: charts of resolved issues and failures by cause]
*Below is a chart of resolved failures by oVirt version:*
[inline images: charts of resolved failures by oVirt version]
*Below is a chart showing failures by suite type:*
* Suite type "None" means a failure that did not result in a failed test,
such as an artifacts-related or packaging-related failure.
[inline images: charts of failures by suite type]
Thanks,
Dafna