Status of Testing in Agile Projects

In agile projects, changes happen quickly. As a result, test status, test progress, and product quality constantly evolve, and testers must find ways to communicate this information to the team so that it can make the decisions needed to stay on track for the successful completion of each iteration. In addition, changes can impact existing functionality from previous iterations. Therefore, manual and automated tests must be kept up to date to manage regression risk effectively.

Communicating Test Status, Progress, and Product Quality

Agile teams make progress by having working software at the end of each iteration. To determine when the team will have working software, the team must monitor the progress of all work items in the iteration and release. Testers in agile teams use various methods to record test progress and status, including test automation results, the progress of test tasks and stories on the agile task board, and burndown charts showing the team's progress. This information can then be shared with the rest of the team using media such as wiki dashboards and dashboard-style emails, as well as verbally during stand-up meetings.

Agile teams can use tools that automatically generate status reports based on test results and task progress, which in turn update dashboards and dashboard-style emails. This method of communication also gathers metrics from the testing process that can be used for process improvement. Communicating test status automatically also frees testers to focus on designing and executing test cases.
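
As a rough illustration of such tool support (not part of the syllabus), the sketch below aggregates automated test results into a status summary that could feed a dashboard. It assumes the test runner exports JUnit-style XML; the file path is hypothetical.

```python
# Minimal sketch: summarize automated test results for a status dashboard.
# Assumes the test runner exports JUnit-style XML; the path is illustrative.
import xml.etree.ElementTree as ET

def summarize(junit_xml_path):
    root = ET.parse(junit_xml_path).getroot()
    # JUnit XML may use <testsuites> or a single <testsuite> as the root;
    # iter() visits the root itself as well as nested suites.
    total = failures = errors = skipped = 0
    for suite in root.iter("testsuite"):
        total += int(suite.get("tests", 0))
        failures += int(suite.get("failures", 0))
        errors += int(suite.get("errors", 0))
        skipped += int(suite.get("skipped", 0))
    passed = total - failures - errors - skipped
    return {"total": total, "passed": passed,
            "failed": failures + errors, "skipped": skipped}

if __name__ == "__main__":
    status = summarize("results/junit.xml")   # hypothetical results file
    print(f"Tests: {status['total']}  passed: {status['passed']}  "
          f"failed: {status['failed']}  skipped: {status['skipped']}")
```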

Teams can use burndown charts to track progress across the release and within each iteration. A burndown chart represents the amount of work remaining to be done relative to the time allotted to the release or iteration.
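
For illustration, here is a minimal sketch of the arithmetic behind a burndown chart, with invented numbers: the remaining work after each day is compared against an ideal, linear burn rate.

```python
# Minimal burndown sketch: remaining story points per day of an iteration,
# compared against the ideal linear line. All numbers are illustrative.
planned_points = 40
iteration_days = 10
# Points actually completed on each elapsed day (example data).
completed_per_day = [3, 5, 0, 6, 4]

remaining = planned_points
for day, done in enumerate(completed_per_day, start=1):
    remaining -= done
    ideal = planned_points * (1 - day / iteration_days)
    print(f"Day {day:2d}: remaining={remaining:2d}  ideal={ideal:4.1f}")
```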

To get an instant, detailed visual representation of the entire team's current status, including testing status, teams can use agile task boards. Story cards, development tasks, test tasks, and other tasks created during iteration planning are captured on the task board, often using color-coded cards to indicate the task type. During the iteration, progress is tracked by moving these tasks across the task board through columns such as “to do,” “in progress,” “to review,” and “done.” Agile teams can use tools to manage their story cards and agile task boards, which can automate dashboards and status updates.
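
Conceptually, a task board is just columns of cards; the toy sketch below (column names from the text above, tasks invented) shows a card moving between columns as work progresses.

```python
# Toy sketch of an agile task board as columns of task cards.
# Column names follow the text; the tasks themselves are invented.
board = {
    "to do": ["write acceptance tests for story 12"],
    "in progress": ["implement story 12"],
    "to review": [],
    "done": [],
}

def move(board, task, src, dst):
    """Move a task card from one column to another."""
    board[src].remove(task)
    board[dst].append(task)

move(board, "implement story 12", "in progress", "to review")
for column, tasks in board.items():
    print(f"{column}: {tasks}")
```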

Test tasks on the task board relate to the acceptance criteria defined for the user stories. When the test automation scripts, manual tests, and exploratory tests for a test task reach “passed” status, the task is moved to the “done” column of the task board. The entire team reviews the status of the task board regularly, often during daily stand-up meetings, to ensure that tasks are moving across the board at an acceptable rate. If tasks (including test tasks) are not moving or are moving too slowly, the team reviews and addresses whatever issues may be blocking their progress.

The daily stand-up meeting is attended by all members of the agile team, including testers. At this meeting, they share their current status. The agenda for each member is:

  • What have you completed since the last meeting?
  • What do you want to complete by the next meeting?
  • What is still in your way?

Any issues that might impede testing progress are communicated at the daily stand-up meetings so that the entire team is aware of the issues and can resolve them accordingly.

To improve overall product quality, many agile teams conduct customer satisfaction surveys to gather feedback on whether the product meets customer expectations. Teams can also use other metrics, similar to those captured in traditional development methods, such as test pass/fail rates, defect discovery rates, confirmation and regression test results, defect density, defects found and fixed, requirements coverage, risk coverage, code coverage, and code churn.
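
For illustration, here is a brief sketch of how a few of these metrics are commonly computed; all figures are invented, and the formulas shown are common conventions rather than syllabus definitions.

```python
# Minimal sketch of a few product quality metrics named above.
# All input figures are illustrative.
tests_run, tests_passed = 250, 238
defects_found, defects_fixed = 31, 27
kloc = 12.4                      # thousands of lines of code
lines_added, lines_changed, lines_deleted = 1800, 950, 400

pass_rate = tests_passed / tests_run
defect_density = defects_found / kloc          # defects per KLOC
fix_rate = defects_fixed / defects_found
code_churn = lines_added + lines_changed + lines_deleted

print(f"pass rate:      {pass_rate:.1%}")
print(f"defect density: {defect_density:.2f} defects/KLOC")
print(f"fixed:          {fix_rate:.1%} of defects found")
print(f"code churn:     {code_churn} lines")
```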

As with any lifecycle, the metrics collected and reported should be relevant and support decision-making. Metrics should not be used to reward, punish, or isolate individual team members.

Managing Regression Risk with Evolving Manual and Automated Test Cases

In an agile project, the product grows with each completed iteration, and so does the scope of testing. In addition to testing the code changes made in the current iteration, testers must also verify that no regression has been introduced into features developed and tested in previous iterations. The risk of introducing regression in agile development is high due to the large amount of code churn (lines of code added, changed, or deleted from one version to another). Because responding to change is a core agile principle, changes may also be made to already-delivered features in order to meet business needs.

To maintain velocity without incurring large amounts of technical debt, it is important that teams invest in test automation at all test levels as early as possible. It is also important that all test assets, such as automated tests, manual test cases, test data, and other test artifacts, are kept up to date with each iteration. It is highly recommended that all test assets be managed in a configuration management tool to provide version control, ensure easy access for all team members, and support making the changes required by modified functionality while preserving the historical information of the test assets.
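
Code churn itself is straightforward to measure. The sketch below uses git to count lines added and deleted between two versions, assuming the project lives in a git repository; the revision names are hypothetical.

```python
# Minimal sketch: measure code churn (lines added/deleted) between two
# versions with git. Assumes a git repository; revision names are invented.
import subprocess

def churn(old_rev, new_rev):
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{old_rev}..{new_rev}"],
        capture_output=True, text=True, check=True).stdout
    added = deleted = 0
    for line in out.splitlines():
        a, d, _path = line.split("\t", 2)
        if a != "-":               # "-" marks binary files in numstat output
            added += int(a)
            deleted += int(d)
    return added, deleted

added, deleted = churn("release-1.2", "HEAD")
print(f"churn since release-1.2: +{added} / -{deleted} lines")
```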

Because complete repetition of all tests is rarely possible, especially in agile projects with tight timelines, testers must allocate time in each iteration to review manual and automated test cases from previous and current iterations, selecting test cases that belong in the regression test suite and retiring test cases that are no longer relevant. Tests written in earlier iterations to verify specific features may be of little value in later iterations because those features have changed, or because new features have changed the behavior of the earlier ones.
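
If test cases carry simple metadata, the outcome of such a review can be recorded mechanically; the sketch below is a toy illustration with invented fields and identifiers.

```python
# Toy sketch: keep only still-relevant test cases in the regression suite
# and record the retired ones. All metadata fields are invented.
test_cases = [
    {"id": "TC-101", "feature": "login", "status": "current"},
    {"id": "TC-102", "feature": "legacy-export", "status": "obsolete"},
    {"id": "TC-103", "feature": "checkout", "status": "current"},
]

regression_suite = [tc for tc in test_cases if tc["status"] == "current"]
retired = [tc["id"] for tc in test_cases if tc["status"] == "obsolete"]
print("regression suite:", [tc["id"] for tc in regression_suite])
print("retired:", retired)
```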

When reviewing test cases, testers should consider their suitability for automation. The team needs to automate as many tests as possible from previous and current iterations. Automated regression testing then reduces regression risk with less effort than manual regression testing would require, and the effort saved frees testers to test new features and functions in the current iteration more thoroughly.

It is important that testers are able to quickly identify and update test cases from previous iterations and/or versions that are affected by changes in the current iteration. Defining how the team will design, write, and store test cases should be done during release planning. Best practices for test development and execution must be established early and applied consistently. Shorter timeframes for testing and constant changes in each iteration compound the impact of poor test design and implementation practices.

Using test automation at all levels of testing enables agile teams to provide rapid feedback on product quality. Well-written automated tests provide a living document of system functionality. By checking automated tests and corresponding test results into the configuration management system aligned with product build versioning, agile teams can review tested functionality and test results for any build at any point in time.

Automated unit tests are run before source code is checked into the configuration management system mainline to ensure code changes do not break the software build. To avoid build interruptions that can slow the progress of the entire team, code should not be checked in until all automated unit tests have passed. The results of the automated unit tests provide immediate feedback on code and build quality, but not on product quality.
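One common way to enforce this check (not prescribed by the syllabus) is a version control hook that runs the unit tests before accepting a check-in. The sketch below assumes git and pytest; the test directory name is hypothetical.

```python
#!/usr/bin/env python3
# Minimal sketch of a git pre-commit hook (saved as .git/hooks/pre-commit
# and made executable) that blocks the check-in unless the automated unit
# tests pass. Assumes pytest; "tests/unit" is an invented directory name.
import subprocess
import sys

result = subprocess.run(["pytest", "--quiet", "tests/unit"])
if result.returncode != 0:
    print("Unit tests failed: commit rejected to protect the build.")
    sys.exit(1)        # non-zero exit aborts the commit
sys.exit(0)
```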

Automated acceptance tests are run regularly against the complete system build as part of continuous integration. These tests are run at least daily, but generally not at every code check-in, as they take longer than automated unit tests and could slow down check-ins. Automated acceptance test results provide feedback on product quality in terms of regression since the last build, but they do not indicate overall product quality status.
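
With a runner such as pytest, one way to achieve this split (an illustration, not a syllabus prescription) is to mark the slower acceptance tests so that check-in runs exclude them while the daily build includes them. The marker name below is invented and would need to be registered in pytest.ini to avoid warnings.

```python
# Minimal sketch: mark slow acceptance tests so they can be excluded at
# check-in time and run in the daily full-system build. Assumes pytest.
import pytest

def test_price_calculation():
    # Fast unit test: run on every check-in.
    assert round(19.99 * 2, 2) == 39.98

@pytest.mark.acceptance
def test_order_end_to_end():
    # Slow acceptance test: run in the daily full-system build.
    # ... drive the deployed system through a complete order flow ...
    pass

# At check-in:        pytest -m "not acceptance"
# In the daily build: pytest -m acceptance
```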

Automated tests can be run continuously against the system. A subset of automated tests covering critical system features and integration points should be run immediately after a new build is deployed to the test environment. These tests are commonly referred to as build verification tests. Their results provide immediate feedback on the software after deployment, so teams don’t waste time testing an unstable build.

The automated regression test set is typically run as part of the main daily build in the continuous integration environment, and again when a new build is deployed to the test environment. When an automated regression test fails, the team stops and investigates the reason for the failure. The test may have failed due to legitimate functional changes in the current iteration; in that case, the test and/or user story may need to be updated to reflect the new acceptance criteria. However, if the test failed due to a defect, it is good practice for the team to fix the defect before proceeding with new features.
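
A build verification subset might look like the sketch below, assuming pytest and a hypothetical deployed test environment; such smoke tests check only that critical entry points respond after deployment.

```python
# Minimal sketch: a small build verification (smoke) subset, tagged so it
# can be run right after each deployment with `pytest -m smoke`.
# The base URL and endpoints are invented.
import pytest
import urllib.request

BASE_URL = "http://test-env.example.com"   # hypothetical test environment

@pytest.mark.smoke
def test_service_is_up():
    with urllib.request.urlopen(f"{BASE_URL}/health", timeout=5) as resp:
        assert resp.status == 200

@pytest.mark.smoke
def test_login_page_reachable():
    with urllib.request.urlopen(f"{BASE_URL}/login", timeout=5) as resp:
        assert resp.status == 200
```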

In addition to test automation, the following test tasks can also be automated:

  • Generating test data
  • Loading test data into systems
  • Deploying builds to test environments
  • Restoring a test environment (e.g., the database or website data files) to a baseline state
  • Comparing data outputs

Automating these tasks reduces overhead and allows the team to focus on developing and testing new features.
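
As a small illustration of two items from the list above (with invented file names and formats), the sketch below generates deterministic test data and compares an actual data output against an expected baseline.

```python
# Minimal sketch of two supporting tasks: generating test data and
# comparing data outputs. File names and formats are illustrative.
import csv
import random

def generate_customers(path, count=100, seed=42):
    """Write deterministic random test data for loading into the system."""
    rng = random.Random(seed)   # fixed seed keeps the data reproducible
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "credit_limit"])
        for i in range(count):
            writer.writerow([i, f"customer-{i}", rng.randrange(500, 5000, 100)])

def outputs_match(expected_path, actual_path):
    """Compare an actual data output against the expected baseline."""
    with open(expected_path) as e, open(actual_path) as a:
        return list(csv.reader(e)) == list(csv.reader(a))

generate_customers("customers.csv")
print("outputs match:", outputs_match("customers.csv", "customers.csv"))
```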

Source: ISTQB®: Certified Tester Foundation Level Agile Tester Syllabus Version 1.0
