Once you have clarified why you are transforming your organization to become agile, it's time to tackle some practical questions:
- What does the system test or integration team do during development sprints?
- What do Scrum delivery teams do during the release sprint or hardening sprint?
- How should a Scrum team’s time be split between supporting the release-level activities of an integration team and planning for the next release?
- Shouldn’t we stagger our product releases so that each group stays busy?
Many organizations have a system test team or integration team that is separate from the Scrum delivery team (the Design-Build-Test team). I sometimes get questions like those above from such organizations as they consider adopting an agile approach, particularly from large organizations that must integrate the work of multiple teams. I’ll answer these questions after giving a couple of definitions.
A System Test Team (also known as the Independent Verification and Validation Team, the Integration and Regression Test Team, or the release-level integration team) is responsible for executing continuous integration and regression testing across teams, and for managing the resolution of problems by feeding defects back to the delivery teams. It works closely with the System Team to continually improve the continuous integration and testing capabilities.
A System Team (sometimes called an Enablement Team, a Build as a Service Team, or other names) is responsible for building everything needed to support continuous integration, test environments, and test data management. It develops the tools and supports the automation. This is often a very small team – as small as one or two people. You only need this team when testing requires integrating the outputs of multiple teams.
Now for the answers to the questions.
First off, read Dennis Stevens’ blog on six things teams can do that are better than writing code that can’t be tested. During a release sprint, delivery teams can work on learning, reducing technical debt, improving unit testing and feature-level validation, refactoring, or preparing for the next release. They should be immediately available to support any defect found during the hardening or release sprint, or be directly involved in the testing. Whatever time remains should be spent improving their capability and preparing for the next release.
During development sprints, the release-level integration team (or system test team) may be integrating and verifying everything delivered in the prior sprint. They should also be getting ready to test what is coming next: collaborating to refine acceptance criteria and to define tests for work about to go to delivery teams. They should be improving their automation and making an effort to understand (or critique) the delivery teams’ unit test coverage. And, as always, they should be improving their capability.
It is really important to avoid the late-testing anti-patterns:
- Anti-pattern one: delivery teams not verifying their product effectively and throwing garbage over to a system test team.
- Anti-pattern two: teams not finishing features (a set of related stories that ultimately need to be tested together as a unit) throughout the release, so that nothing is system-testable until the end of the release.
- Anti-pattern three: deferring any amount of integration and regression testing until everything is complete.
The goal is to test as much as possible throughout the release, deferring very little to the end. This means delivery teams deliver verified and technically excellent features each sprint. Until they are perfect at this, they have work they can do to improve. This means system test teams integrate, verify and validate frequently (continuously). Until they are capable of doing this, they have work they can do to improve.
If you are looking for a simple formula, you will be disappointed: there is no simple percent-of-time formula for this. Look at where you are, coordinate with all the teams involved, improve your ability to minimize untested code, make sure work is ready ahead of development, and maximize the flow of validated product through the system.
* Credit: I took an email from Dennis and massaged it into this blog post.