The Differences between Testing in Traditional and Agile Approaches

Testing activities are closely tied to development activities, and therefore testing varies across lifecycles. To work effectively and efficiently, testers need to understand the differences between testing in traditional lifecycle models (e.g., sequential models such as the V-model or iterative models such as RUP) and agile lifecycles. Agile lifecycles differ in terms of how testing and development activities are integrated, the project work products, the naming conventions, the entry and exit criteria used for the different test levels, the use of tools, and how independent testing can be used effectively.

Testers should keep in mind that organizations vary widely in how they implement lifecycles. Deviating from the ideals of agile lifecycles can be a sensible adaptation of practices to the situation at hand. The ability to adapt to the context of a particular project, including the software development practices actually followed, is a key success factor for testers.

Testing and Development Activities

One of the main differences between traditional lifecycles and agile lifecycles is the idea of very short iterations, each of which results in working software that delivers features of value to business stakeholders. The project begins with a release planning phase, followed by a sequence of iterations. Each iteration starts with an iteration planning period. Once the iteration scope is defined, the selected user stories are developed, integrated into the system, and tested. These iterations are highly dynamic, with development, integration, and testing activities running largely in parallel and overlapping considerably. Testing activities occur throughout the iteration, not as a final activity.

Developers, testers, and business stakeholders all play a role in testing, as in traditional lifecycles. Developers perform unit tests as they develop features against user stories. Testers then test those features. Business stakeholders also test the stories during implementation. They may use written test cases, or they may simply experiment with the feature and use it to provide quick feedback to the development team.
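
To make this concrete, here is a minimal sketch of the kind of unit test a developer might write alongside a user story's code, using pytest. The function, the story it implements, and all names are hypothetical examples, not taken from the syllabus.

    import pytest

    # Hypothetical production code implementing a user story:
    # "As a shopper, I get a percentage discount applied to my order."
    def apply_discount(price: float, percent: float) -> float:
        """Return the price after applying a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    # Developer unit tests, run on every build.
    def test_discount_reduces_price():
        assert apply_discount(100.0, 20) == 80.0

    def test_invalid_discount_is_rejected():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)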

In some cases, hardening or stabilization iterations are performed periodically to address remaining defects and other forms of technical debt. However, the better practice is to consider no feature finished until it has been integrated into the system and tested. Another good practice is to address bugs remaining from the previous iteration at the beginning of the next iteration, as part of that iteration's backlog (referred to as "fix bugs first"). However, some teams object that this practice makes the total amount of work in the iteration unpredictable and makes it harder to estimate when the remaining features will be done. At the end of the iteration sequence, there may be a set of release activities to get the software ready for delivery, although in some cases delivery occurs at the end of each iteration.

When risk-based testing is used as one of the testing strategies, high-level risk analysis occurs during release planning, and testers often drive this analysis. However, the specific quality risks associated with each iteration are identified and assessed during iteration planning. This risk analysis can influence the order of development and the priority and depth of testing for features. It also influences the estimate of the testing effort required for each feature.
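
As an illustration of how such a risk analysis might feed iteration planning, the following sketch scores each story by likelihood times impact and orders testing accordingly. The stories and ratings are invented for the example; a real risk analysis would involve the whole team.

    # Simple quality risk scoring: risk = likelihood x impact.
    # Higher-scoring stories are developed and tested first, and more deeply.
    stories = [
        {"story": "Payment processing", "likelihood": 4, "impact": 5},
        {"story": "Profile photo upload", "likelihood": 2, "impact": 2},
        {"story": "Password reset", "likelihood": 3, "impact": 4},
    ]

    for s in stories:
        s["risk"] = s["likelihood"] * s["impact"]

    # Order of development/testing and relative test depth follow the score.
    for s in sorted(stories, key=lambda s: s["risk"], reverse=True):
        print(f"{s['story']}: risk={s['risk']}")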

Pairing is used in some agile practices (e.g., Extreme Programming). In pairing, two testers can work together to test a feature, or a tester can work with a developer to develop and test a feature. Pairing can be difficult when the test team is distributed across multiple locations, but processes and tools can help enable distributed pairing.

Testers can also act as test and quality coaches within the team, sharing testing knowledge and supporting quality assurance work within the team. This fosters a sense of collective responsibility for the quality of the product.

In many agile teams, test automation occurs at all test levels, which can mean that testers spend much of their time creating, executing, monitoring, and maintaining automated tests and their results. Because of the heavy use of test automation, a higher percentage of the manual testing on agile projects tends to be done using experience-based and defect-based techniques such as software attacks, exploratory testing, and error guessing. While developers focus on creating unit tests, testers should focus on creating automated integration, system, and system integration tests. As a result, agile teams tend to prefer testers with a strong technical and test automation background.
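
As a sketch of what such a tester-owned automated test might look like at the system level, the following checks an HTTP API end to end. The endpoint, payload, and the availability of a test environment at the given URL are assumptions made for the example; requests is a widely used third-party HTTP library.

    import requests

    BASE_URL = "http://localhost:8000"  # hypothetical test environment

    def test_create_and_fetch_order():
        # Create an order through the public API (assumed endpoint and payload).
        created = requests.post(f"{BASE_URL}/orders", json={"item": "book", "qty": 1})
        assert created.status_code == 201
        order_id = created.json()["id"]

        # Read it back and verify the system persisted it correctly.
        fetched = requests.get(f"{BASE_URL}/orders/{order_id}")
        assert fetched.status_code == 200
        assert fetched.json()["item"] == "book"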

A fundamental principle of Agile is that changes can occur throughout the project. Therefore, lean documentation of the work product is preferred in Agile projects. Changes to existing functionality have an impact on testing, especially regression testing. The use of automated testing is one way to manage the testing effort associated with changes. However, it is important that the rate of change does not exceed the project team’s ability to manage the risks associated with those changes.

Project Work Products

Project work products of direct interest to agile testers generally fall into three categories:

  1. Business-oriented work products that describe what is needed (e.g., requirements specifications) and how it is to be used (e.g., user documentation)
  2. Development work products that describe how the system is built (e.g., database entity-relationship diagrams), that actually implement the system (e.g., code), or that evaluate individual pieces of code (e.g., automated unit tests)
  3. Test work products that describe how the system will be tested (e.g., test strategies and plans), that actually test the system (e.g., manual and automated tests), or that present the test results (e.g., test dashboards)

In a typical agile project, it is common practice to avoid creating large amounts of documentation. Instead, the focus is more on having working software along with automated tests that demonstrate conformance to requirements. This encouragement to reduce documentation only applies to documentation that does not provide value to the customer. In a successful agile project, a balance is struck between increasing efficiency by reducing documentation and providing sufficient documentation to support business, testing, development, and maintenance activities. The team must decide during release planning which work products are needed and how extensive the work product documentation needs to be.

Typical business-oriented work products in agile projects are user stories and acceptance criteria. User stories are the agile form of requirements specifications and should explain how the system should behave with respect to a single, coherent feature or function. A user story should define a feature small enough to be completed in a single iteration. Larger collections of related features, or a collection of sub-features that form a single complex feature, may be referred to as an "epic." Epics can contain user stories for different development teams. For example, one user story may describe what is needed at the API (middleware) level, while another story describes what is needed at the UI (application) level. These collections can be developed over a series of sprints. Each epic and its associated user stories should have appropriate acceptance criteria.
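
As a small illustration of this structure, an epic spanning an API-level story and a UI-level story, each with its own acceptance criteria, might be captured like this (all names and criteria are hypothetical):

    # Illustrative structure only: one epic, two stories at different levels,
    # each story carrying its own acceptance criteria.
    epic = {
        "epic": "Customer self-service password reset",
        "stories": [
            {
                "level": "API (middleware)",
                "story": "As a client app, I can request a reset token for a user",
                "acceptance_criteria": [
                    "POST /reset-tokens returns a single-use token",
                    "Token expires after 15 minutes",
                ],
            },
            {
                "level": "UI (application)",
                "story": "As a user, I can reset my password from the login page",
                "acceptance_criteria": [
                    "Reset link is visible on the login page",
                    "A confirmation message appears after submission",
                ],
            },
        ],
    }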

Typical developer work products in agile projects include the code itself. Agile developers also often create automated unit tests. These tests may be created after the code is developed. In some cases, though, developers create tests incrementally, before each piece of code is written, in order to verify that the piece of code works as expected once it is written. While this approach is referred to as "test-first" or "test-driven development," in reality the tests are more a form of executable low-level design specification than tests.
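
The following sketch shows that test-first rhythm in miniature, with pytest and hypothetical names: the test is written before the production code and reads like an executable low-level design specification.

    # Step 1: write a failing test that pins down the expected behavior.
    def test_slug_replaces_spaces_and_lowercases():
        assert make_slug("Agile Testing Basics") == "agile-testing-basics"

    # Step 2: only then write the production code, just enough to pass.
    def make_slug(title: str) -> str:
        return title.strip().lower().replace(" ", "-")

Since pytest collects the tests only after the whole module is imported, defining the test above the function it exercises is unproblematic.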

The typical work products of testers in agile projects include automated tests as well as documents such as test plans, quality risk catalogs, manual tests, bug reports, and test result logs. These documents are captured in as lightweight a manner as possible, which is often also true of these documents in traditional lifecycles. Testers also derive test metrics from defect reports and test result logs, and again the emphasis is on a lean approach.
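
As a sketch of how such metrics might be derived in a lean fashion, the following computes a pass rate from a test results log; the log format is an assumption made for the example.

    from collections import Counter

    # Hypothetical test results log, e.g., parsed from a CI run.
    results_log = [
        {"test": "login", "status": "pass"},
        {"test": "checkout", "status": "fail"},
        {"test": "search", "status": "pass"},
    ]

    counts = Counter(r["status"] for r in results_log)
    pass_rate = counts["pass"] / len(results_log) * 100
    print(f"pass rate: {pass_rate:.0f}% ({counts['fail']} failing)")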

In some agile implementations, particularly for regulated, safety-critical, distributed, or highly complex projects and products, further formalization of these work products is required. For example, some teams transform user stories and acceptance criteria into more formal requirements specifications. Vertical and horizontal traceability reports may be created to satisfy auditors, regulations, and other requirements.
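
A vertical traceability report can be as simple as a generated mapping from user stories to the tests that cover them. The following sketch, with invented story IDs and test names, flags uncovered stories for follow-up.

    # Illustrative story-to-test traceability data.
    story_to_tests = {
        "US-101": ["test_login_ok", "test_login_lockout"],
        "US-102": ["test_export_csv"],
        "US-103": [],  # no coverage yet; would be flagged for an auditor
    }

    for story, tests in story_to_tests.items():
        status = ", ".join(tests) if tests else "NOT COVERED"
        print(f"{story}: {status}")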

Test Levels

Test levels are test activities that are logically related, often by the maturity or completeness of the test object.

In sequential lifecycle models, the test levels are often defined so that the exit criteria of one level are part of the entry criteria for the next level. In some iterative models, this rule does not apply: test levels overlap, and requirements specification, design specification, and development activities may overlap with test levels.

In some agile lifecycles, overlap occurs because changes to requirements, design, and code can occur at any point in an iteration. Although Scrum in theory does not allow changes to user stories after iteration planning, in practice such changes sometimes occur. During an iteration, a given user story usually goes through the following testing activities in sequence:

  • Unit tests, which are usually performed by the developer
  • Feature acceptance tests, which are sometimes divided into two activities:
    • Feature verification testing, which is often automated, may be performed by developers or testers, and involves testing against the user story's acceptance criteria (see the sketch after this list)
    • Feature validation testing, which is usually manual and involves developers, testers, and business stakeholders working together to determine whether the feature is fit for deployment, to improve the visibility of the progress made, and to obtain real feedback from the business stakeholders
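
Below is a minimal sketch of an automated feature verification test tied directly to a user story's acceptance criteria. The story, the criteria, and the Checkout stand-in class are all hypothetical.

    import pytest

    # User story US-201: "As a shopper, I can apply a voucher code at checkout."
    # Acceptance criterion 1: a valid code reduces the order total.
    # Acceptance criterion 2: an expired code is rejected with a clear error.

    class ExpiredVoucherError(Exception):
        pass

    class Checkout:
        """Tiny stand-in for the real checkout component."""
        def __init__(self, subtotal: float):
            self.subtotal = subtotal
            self.total = subtotal

        def apply_voucher(self, code: str) -> None:
            if code == "XMAS2019":            # pretend this code has expired
                raise ExpiredVoucherError(code)
            self.total = self.subtotal * 0.9  # pretend other codes give 10% off

    def test_valid_voucher_reduces_total():
        checkout = Checkout(subtotal=50.0)
        checkout.apply_voucher("SPRING10")
        assert checkout.total < checkout.subtotal

    def test_expired_voucher_is_rejected():
        checkout = Checkout(subtotal=50.0)
        with pytest.raises(ExpiredVoucherError):
            checkout.apply_voucher("XMAS2019")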

In addition, regression testing often runs in parallel throughout the iteration. This involves re-running the automated unit tests and feature verification tests from the current iteration and previous iterations, usually through a continuous integration framework.

In some agile projects, there may be a system test level that begins as soon as the first user story is ready for such testing. This can involve functional testing as well as non-functional testing for performance, reliability, usability, and other relevant test types.

Agile teams can use various forms of acceptance testing. Internal alpha testing and external beta testing may be performed at the end of each iteration, after the completion of all iterations, or after a series of iterations. User acceptance testing, operational acceptance testing, regulatory acceptance testing, and contract acceptance testing can likewise be performed at the end of each iteration, after the completion of all iterations, or after a series of iterations.

Testing and Configuration Management

Agile projects often use automated tools to develop, test, and manage the software. Developers use tools for static analysis, unit testing, and code coverage. Developers continuously check code and unit tests into a configuration management system, using automated build and test frameworks. These frameworks enable continuous integration of new software into the system, with static analysis and unit tests running repeatedly as new software is checked in.
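
A check-in gate of this kind can be sketched as a small script that the continuous integration job runs: static analysis first, then the fast unit test suite. The tool choices (flake8, pytest) and the directory layout are assumptions for the example, not mandated by the syllabus.

    import subprocess
    import sys

    # Commands run on every check-in; each must succeed before integration.
    steps = [
        ["flake8", "src/"],                   # static analysis
        ["pytest", "tests/unit", "--quiet"],  # fast unit tests
    ]

    for cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            sys.exit(f"CI gate failed at: {' '.join(cmd)}")

    print("CI gate passed: safe to integrate")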

These automated tests can also include functional testing at the integration and system levels. Such automated functional tests can be created using functional test harnesses, open-source user interface functional test tools, or commercial tools and integrated with the automated tests that are executed as part of the continuous integration. In some cases, functional tests are separated from unit tests and run less frequently due to the duration of the functional tests. For example, the unit tests may be run every time new software is checked in, while the longer functional tests are run only every few days.
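
One common way to realize this split, sketched below with pytest, is to mark the slow functional tests so that the frequent run can deselect them; the marker name "slow" is a project convention, not a built-in.

    import pytest

    def test_tax_calculation():
        # Fast unit test: runs on every check-in.
        assert round(100 * 0.19, 2) == 19.0

    @pytest.mark.slow
    def test_end_to_end_order_flow():
        # Long-running functional test: runs only in the periodic suite.
        ...  # placeholder for the actual end-to-end steps

The per-check-in run would then execute pytest -m "not slow", while the periodic run executes everything; registering the marker in pytest.ini avoids warnings.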

One goal of the automated tests is to confirm that the build works and can be installed. If an automated test fails, the team should fix the underlying bug in time for the next code check-in. This requires an investment in real-time test reports to provide good visibility into test results. This approach helps reduce costly and inefficient cycles of “build-install-fail-rebuild-reinstall” that can occur on many traditional projects, as changes that break the build or cause the software to fail to install are quickly identified.

Automated testing and build tools help manage the regression risk associated with the frequent changes that often occur in agile projects. However, relying too heavily on automated unit tests to manage these risks can be problematic, as unit tests often have limited effectiveness in detecting errors. Automated tests at the integration and system levels are also required.

Organizational Options for Independent Testing

Independent testers are often more effective at finding bugs. In some agile teams, developers create many of the tests in the form of automated tests. One or more testers may be embedded in the team and perform many of the testing tasks. However, because these testers are embedded in the team, there is a risk that they lose their independence and the objectivity of their evaluations.

Other agile teams maintain completely independent, separate test teams and deploy testers as needed in the final days of each sprint. This helps maintain independence and allows these testers to provide an objective, unbiased evaluation of the software. However, time pressures, lack of understanding of the product’s new features, and relationship issues with business stakeholders and developers often lead to problems with this approach.

A third option is to have an independent, separate test team, where testers are assigned to an agile team for the long term at the beginning of the project, allowing them to maintain their independence while building a good understanding of the product and good relationships with other team members. In addition, the independent test team may have specialized testers outside of the agile teams working on long-term and/or iteration-independent activities, such as developing automated test tools, performing non-functional testing, creating and supporting test environments and data, and performing levels of testing that may not fit well into a sprint (e.g., systems integration testing).

Source: ISTQB®: Certified Tester Foundation Level Agile Tester Syllabus Version 1.0
