[Users] Next Release Planning

Hi all,

Now that oVirt 3.1 has shipped, we need to start the planning process for the next release. One of the major topics for this week's weekly sync is to review the release criteria.

The criteria we used for 3.1 are laid out on the wiki [1]. I will be posting an equivalent version for the next release in the next couple of days, but it will mostly be copy/paste from that page.

Please think about the release criteria and whether or not we want to add/remove/change things for this release. This needs to be determined now to make sure that the release process runs more smoothly down the line.

Thanks,
Mike

[1] http://wiki.ovirt.org/wiki/Second_Release

On 20/08/2012, at 10:02 PM, Mike Burns wrote:
Please think about the release criteria and whether or not we want to add/remove/change things for this release. [...]
Is there some way we can do an end-to-end platform test for most of the things mentioned there, to sanity check the binaries before announcement? I'm trying to think of some way to catch the "broken ISO" problem that 3.1 has with NFS storage, so something similar doesn't occur again in future.

Any ideas?

Regards and best wishes,
Justin Clift
--
Aeolus Community Manager
http://www.aeolusproject.org

On Tue, 2012-08-21 at 19:52 +1000, Justin Clift wrote:
Is there some way we can do an end-to-end platform test for most of the things mentioned there, to sanity check the binaries before announcement?
[...]
Any ideas?
Yes, this makes a lot of sense to me. We should make an end-to-end sanity test with all components part of the release criteria.

Mike

* Mike Burns <mburns@redhat.com> [2012-08-21 07:29]:
Yes, this makes a lot of sense to me. We should make an end-to-end sanity test with all components part of the release criteria.
I haven't seen much discussion around testing the complete stack as a whole. I'm wondering if the all-in-one build makes a good platform to build stack testing against? I don't really enjoy fixing up jboss or selinux or various other tweaks on test day when installing from scratch (though that does find some bugs), so all-in-one seems like a good sanity check.

From there, building/writing some tests using either engine-cli or the ovirt-sdk python bindings seems like a good way to exercise the function of the release; a rough sketch of what I mean follows below.

With nested mode supported, would it be possible to have a jenkins job run a test that booted the all-in-one iso and ran some tests against that?
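Something along these lines, say, using the oVirt 3.x python SDK (the URL, credentials, and expected states here are assumptions on my part, not an existing suite):

    from ovirtsdk.api import API

    # Point the SDK at the engine on the all-in-one box (URL/credentials assumed).
    api = API(url='https://localhost:8443/api',
              username='admin@internal',
              password='secret',
              insecure=True)   # test box with a self-signed cert

    # Basic release sanity: every host the setup created should be operational.
    for host in api.hosts.list():
        state = host.get_status().get_state()
        print('host %s: %s' % (host.get_name(), state))
        assert state == 'up', 'host %s is not up' % host.get_name()

    # Storage domains should at least be visible to the engine.
    for sd in api.storagedomains.list():
        print('storage domain %s (%s)' % (sd.get_name(), sd.get_type()))

    api.disconnect()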
--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com

On Tue, 2012-08-21 at 09:10 -0500, Ryan Harper wrote:
I haven't seen much discussion around testing the complete stack as a whole. I'm wondering if the all-in-one build makes a good platform to build stack testing against?
[...]
With the nested mode supported, would it be possible to have a jenkins job run a test that booted the all-in-one iso and ran some tests against that?
Just want to point out that ^^ wouldn't catch that ovirt-node is unusable due to a kernel/vdsm bug. All-in-one testing is a good idea to catch many issues, but we need to be running some sort of end-to-end testing with ovirt-node as well; even a bare smoke-boot of the node image would help (a sketch follows below).

Mike
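As a very rough sketch of that node smoke-boot, assuming qemu-kvm on the jenkins slave with nested virt enabled (the ISO name, memory size, timeout, and prompt check are all illustrative, not a real job definition):

    import os
    import subprocess
    import time

    ISO = 'ovirt-node-image.iso'   # hypothetical build artifact from the compose
    LOG = 'console.log'

    # Boot the node ISO headless under (nested) KVM, serial console to a file.
    vm = subprocess.Popen(['qemu-system-x86_64',
                           '-enable-kvm', '-m', '2048',
                           '-cdrom', ISO, '-boot', 'd',
                           '-display', 'none',
                           '-serial', 'file:%s' % LOG])
    try:
        deadline = time.time() + 600   # ten-minute boot budget
        booted = False
        while time.time() < deadline:
            # Watch the console log for evidence the node came up; the
            # 'login:' string is an illustrative stand-in for a real check.
            if os.path.exists(LOG):
                with open(LOG) as f:
                    if 'login:' in f.read():
                        booted = True
                        break
            time.sleep(5)
        assert booted, 'node ISO never reached a prompt'
    finally:
        vm.terminate()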

* Mike Burns <mburns@redhat.com> [2012-08-21 09:25]:
Just want to point out that ^^ wouldn't catch that ovirt-node is unusable due to a kernel/vdsm bug. All-in-one testing is a good idea to catch many issues, but we need to be running some sort of end-to-end testing with ovirt-node as well.
Booting the image is the starting point. One of the tests to run on top of an all-in-one would be attempting to add an NFS export domain from localhost (a sketch of that one follows at the end), and the list goes on.

Depending on the complexity of the tests, it may not lend itself to a jenkins job, but I think the approach of writing engine-level FVT and running it against the all-in-one is sound.

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com
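The export-domain check described above, as a minimal sketch (same oVirt 3.x python SDK assumption; the domain name, export path, and credentials are illustrative, and the NFS export itself is assumed to already exist on the box):

    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    api = API(url='https://localhost:8443/api',
              username='admin@internal', password='secret', insecure=True)

    # The all-in-one setup gives us exactly one host to attach through.
    host = api.hosts.list()[0]

    # Ask the engine to add an NFS export domain served from localhost --
    # the case the broken 3.1 ISO would have tripped over.
    export = api.storagedomains.add(params.StorageDomain(
        name='sanity-export',
        type_='export',
        host=params.Host(name=host.get_name()),
        storage=params.Storage(type_='nfs',
                               address='localhost',
                               path='/exports/sanity')))

    print('created export domain: %s' % export.get_name())
    api.disconnect()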