Many of the testing techniques and levels of testing that apply to traditional projects can also be applied to agile projects. However, for agile projects, there are some specific considerations and variations in testing techniques, terminology, and documentation that should be taken into account.
Acceptance Criteria, Adequate Coverage, and Other Information for Testing
In agile projects, initial requirements are outlined in the form of user stories in a prioritized backlog at the beginning of the project. The initial requirements are short and usually follow a predefined format (see Section 1.2.2). Non-functional requirements, such as usability and performance, are also important and can be specified as individual user stories or in conjunction with other functional user stories. Non-functional requirements may follow a predefined format or standard, such as [ISO25000], or an industry-specific standard.
The user stories serve as an important test basis. Other possible test bases include:
- Experience from previous projects
- Existing functions, features, and quality characteristics of the system
- Code, architecture, and design
- User profiles (context, system configurations, and user behavior)
- Information about defects from existing and previous projects
- A categorization of defects in a defect taxonomy
- Applicable standards (e.g., [DO-178B] for avionics software)
- Quality risks
During each iteration, developers create code that implements the functions and features described in the user stories with the appropriate quality attributes, and this code is verified and validated through acceptance testing. To be testable, acceptance criteria should address the following topics, as relevant:
- Functional behavior: The externally observable behavior with user actions as input, operating under specific configurations.
- Quality characteristics: How the system performs the specified behavior. These characteristics may also be referred to as quality attributes or non-functional requirements. Common quality characteristics are performance, reliability, usability, etc.
- Scenarios (use cases): A sequence of actions between an external actor (often a user) and the system to achieve a specific goal or business task.
- Business rules: Activities that can be performed in the system only under certain conditions defined by external procedures and constraints (e.g., the procedures used by an insurance company to process insurance claims).
- External interfaces: Descriptions of the connections between the system being developed and the external world. External interfaces can be divided into different types (user interface, interface to other systems, etc.).
- Constraints: Any design or implementation constraints that limit what the developer can do. Devices with embedded software often have physical constraints such as size, weight, and interface ports to consider.
- Data definitions: The customer may describe the format, data type, allowable values, and default values for a data element in the composition of a complex business data structure (e.g., the zip code in a U.S. postal address).
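As an illustration of the last point, a data definition in the acceptance criteria can often be expressed directly as an executable check. The following is a minimal sketch in Python using pytest; the validate_us_zip_code function and the accepted formats (five digits, optionally followed by a hyphen and four more) are assumptions made purely for illustration.

```python
import re
import pytest

def validate_us_zip_code(value: str) -> bool:
    """Hypothetical data definition: 5 digits, optionally '-' plus 4 digits (ZIP+4)."""
    return re.fullmatch(r"\d{5}(-\d{4})?", value) is not None

@pytest.mark.parametrize("value, expected", [
    ("12345", True),        # plain 5-digit ZIP
    ("12345-6789", True),   # ZIP+4 format
    ("1234", False),        # too short
    ("12345-678", False),   # incomplete extension
    ("ABCDE", False),       # wrong content type
])
def test_zip_code_data_definition(value, expected):
    assert validate_us_zip_code(value) is expected
```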
In addition to the user stories and associated acceptance criteria, other information is relevant to the tester, such as:
- How the system will function and be used
- The system interfaces that can be used or accessed to test the system
- Whether the current tool support is sufficient
- Whether the tester has sufficient knowledge and skills to perform the required tests
Testers will often find during iterations that they need additional information (e.g., code coverage) and should work with other members of the agile team to obtain this information. Relevant information plays a role in determining whether a particular activity can be considered complete. This concept of defining “done” is critical in agile projects and is applied in several ways, as explained in the following subsections.
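As an example of obtaining such information, decision (branch) coverage of the unit tests can be gathered with standard tooling. The following is a minimal sketch assuming a Python codebase with pytest and coverage.py; the shipping_cost function and its free-shipping threshold are invented purely for illustration.

```python
# test_shipping.py -- unit tests designed for 100% decision coverage of shipping_cost

def shipping_cost(order_total: float) -> float:
    """Unit under test: a single decision point (free-shipping threshold)."""
    if order_total >= 50.0:
        return 0.0
    return 4.95

def test_free_shipping_at_threshold():
    assert shipping_cost(50.0) == 0.0    # exercises the True branch

def test_paid_shipping_below_threshold():
    assert shipping_cost(49.99) == 4.95  # exercises the False branch

# Branch (decision) coverage could then be measured with coverage.py, e.g.:
#   coverage run --branch -m pytest test_shipping.py
#   coverage report -m
```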
Test Levels
Each test level has its own definition of “done”. The following list contains examples that may be relevant for the different test levels.
- Unit testing
  - 100% decision coverage, if possible, with a careful review of any infeasible paths
  - Static analysis performed on all code
  - No unresolved major defects (ranked by priority and severity)
  - No known unacceptable technical debt in the design and code
  - Review of all code, unit tests, and unit test results
  - All unit tests are automated
  - Key quality characteristics are within agreed-upon limits (e.g., performance)
- Integration testing
  - All functional requirements are tested, including positive and negative tests, with the number of tests based on scope, complexity, and risks
  - All interfaces between units are tested
  - All quality risks are covered according to the agreed scope of testing
  - No unresolved serious defects (prioritized by risk and importance)
  - All defects found are reported
  - All regression testing is automated where possible, with all automated tests stored in a common repository
- System testing
  - End-to-end testing of user stories, features, and functions
  - All user personas are covered
  - Coverage of key system quality attributes (e.g., performance, robustness, reliability)
  - Testing is performed in a production-like environment, including all hardware and software for all supported configurations, where feasible
  - All quality risks are covered according to the agreed scope of testing
  - All regression testing is automated where possible, with all automated tests stored in a common repository
  - All defects found are reported and resolved as appropriate
  - No unresolved serious defects (prioritized by risk and importance)
User Story
The definition of “done” for user stories can be determined based on the following criteria:
- The user stories selected for the iteration are complete, understood by the team, and have detailed, testable acceptance criteria
- All elements of the user story, including the acceptance tests for the user story, are specified and reviewed
- The tasks required to implement and test the selected user stories have been identified and estimated by the team
Feature
Defining “done” for features that may span multiple user stories or epics may include the following:
- All constituent user stories with acceptance criteria are defined and approved by the customer
- The design is complete, with no known technical debt
- Code is complete, with no known technical debt or unfinished refactoring
- Unit tests have been performed and have achieved the defined coverage level
- Integration tests and system tests for the feature have been performed according to the defined coverage criteria
- There are no significant defects remaining to be fixed
- Feature documentation is complete, including release notes, user guides, and online help functions
Iteration
The definition of “done” for the iteration may include the following:
- All features for the iteration are ready and tested individually according to the feature-level criteria
- Any non-critical defects that cannot be fixed as part of the iteration are added to the product backlog and prioritized
- Integration of all features for the iteration is complete and tested
- Documentation written, reviewed, and approved
At this point, the software is potentially releasable because the iteration has been successfully completed, but not all iterations result in a release.
Release
The definition of “done” for a release, which may span multiple iterations, may include the following areas:
- Coverage: All relevant elements of the test basis for all content in the release have been covered by testing. The adequacy of coverage is based on what is new or has changed, its complexity and scope, and the associated risks of defects.
- Quality: The defect intensity (e.g., how many defects are found per day or per transaction), the defect density (e.g., the number of defects found relative to the number of user stories, effort, and/or quality attributes), and the estimated number of remaining defects are within acceptable limits; the consequences of unresolved and remaining defects (e.g., their severity and priority) are understood and accepted; and the residual risk associated with each identified quality risk is understood and accepted. A small calculation sketch for these defect metrics follows this list.
- Time: If the established delivery date has been reached, the business considerations associated with releasing or not releasing must be addressed.
- Cost: The estimated lifecycle cost should be used to calculate the return on investment for the delivered system (i.e., the calculated development and maintenance costs should be significantly lower than the expected total revenue of the product). The bulk of the lifecycle cost often arises from maintenance after release, due to the number of defects escaping into production.
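To make the quality criterion above concrete, the defect metrics can be computed from simple counts, as in the sketch below. All numbers and thresholds are hypothetical; the point is only to show how defect intensity and density relate to the counts mentioned in the list.

```python
# Hypothetical release metrics used to judge the "Quality" criterion above
defects_found = 24
test_days = 12                  # duration of test execution for the release
user_stories_delivered = 30
defects_remaining_estimate = 3  # e.g., taken from a defect trend analysis

defect_intensity = defects_found / test_days             # defects found per day
defect_density = defects_found / user_stories_delivered  # defects per user story

print(f"Defect intensity: {defect_intensity:.1f} defects/day")
print(f"Defect density:  {defect_density:.2f} defects/user story")
print(f"Estimated remaining defects: {defects_remaining_estimate}")
# The release decision then compares these figures against agreed limits,
# e.g., at most 1.0 defect per user story and no open severity-1 defects.
```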
Applying Acceptance Test-Driven Development
Acceptance test-driven development is a test-first approach. The test cases are created before the user story is implemented. The test cases are created by the agile team, which includes the developer, tester, and business representatives, and can be manual or automated. The first step is a specification workshop in which the user story is analyzed, discussed, and written by developers, testers, and business representatives. Any incompleteness, ambiguities, or errors in the user story are resolved during this process.
The next step is to create the tests. This can be done by the team together or by a single tester. In either case, an independent person, such as a business representative, validates the tests. The tests are examples that describe the specific characteristics of the user story. These examples help the team implement the user story correctly. Since examples and tests are the same thing, the terms are often used interchangeably. The work begins with basic examples and open-ended questions.
Typically, the first tests are the positive tests, which confirm correct behavior without exception or error conditions and include the sequence of activities that will be performed if everything goes as expected. After the positive path tests, the team should write negative path tests and cover non-functional attributes (e.g., performance, usability). The tests are worded so that everyone involved can understand them. They consist of natural language sentences that include the required preconditions (if any), the inputs, and the corresponding outputs.
The examples must cover all the characteristics of the user story and should not go beyond the story: there should be no example that describes an aspect of the user story that is not documented in the story itself. In addition, no two examples should describe the same characteristics of the user story.
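The following sketch shows what such examples might look like for a hypothetical user story (“As an account holder, I want to withdraw money from my account”), written in Python with pytest. In ATDD the tests would exist before the implementation and fail until the story is done; a minimal Account class is included here only so the sketch is self-contained.

```python
import pytest

class InsufficientFundsError(Exception):
    """Raised when a withdrawal exceeds the available balance."""

class Account:
    """Minimal stand-in implementation so the sketch runs; in ATDD the
    tests below would be written first and drive this implementation."""
    def __init__(self, balance: float):
        self.balance = balance

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise InsufficientFundsError(amount)
        self.balance -= amount

# Positive-path examples: confirm correct behavior when everything goes as expected
def test_withdrawal_reduces_balance():
    account = Account(balance=100)
    account.withdraw(30)
    assert account.balance == 70

def test_withdrawing_the_full_balance_is_allowed():
    account = Account(balance=100)
    account.withdraw(100)
    assert account.balance == 0

# Negative-path example: error condition agreed with the business representative
def test_overdraft_is_rejected_and_balance_unchanged():
    account = Account(balance=100)
    with pytest.raises(InsufficientFundsError):
        account.withdraw(101)
    assert account.balance == 100
```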
Functional and Non-Functional Black Box Test Design
In agile testing, many tests are created by testers in parallel with the developers’ programming activities. Just as developers program based on user stories and acceptance criteria, testers create tests based on the user stories and their acceptance criteria. (Some tests, such as exploratory tests and other experience-based tests, are created later, during test execution, as discussed in Section 3.3.4.) To create these tests, testers can apply traditional black-box test design techniques such as equivalence partitioning, boundary value analysis, decision tables, and state transition testing. Boundary value analysis, for example, can be used to select test values when a customer is limited in the number of items they may select for purchase.
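A minimal sketch of boundary value analysis for the purchase-quantity example above, written as pytest tests; the maximum of 10 items per order and the quantity_is_accepted function are assumptions made purely for illustration.

```python
import pytest

MAX_ITEMS_PER_ORDER = 10   # assumed business rule: at most 10 items per purchase

def quantity_is_accepted(quantity: int) -> bool:
    """Hypothetical validation of the selected purchase quantity."""
    return 1 <= quantity <= MAX_ITEMS_PER_ORDER

# Boundary value analysis: test values on and around both partition boundaries
@pytest.mark.parametrize("quantity, expected", [
    (0, False),   # just below the lower boundary
    (1, True),    # lower boundary
    (2, True),    # just above the lower boundary
    (9, True),    # just below the upper boundary
    (10, True),   # upper boundary
    (11, False),  # just above the upper boundary
])
def test_purchase_quantity_boundaries(quantity, expected):
    assert quantity_is_accepted(quantity) is expected
```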
In many situations, non-functional requirements can be documented as user stories. Black-box test design techniques (such as boundary value analysis) can also be used to create tests for non-functional quality attributes. The user story may contain performance or reliability requirements. For example, a given execution must not exceed a time limit, or a given operation may fail no more than a specified number of times.
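Such a performance criterion can be checked with a simple timing assertion. The sketch below assumes a hypothetical search_catalog operation and a two-second response-time limit; in practice the call would go to the real system under test.

```python
import time

RESPONSE_TIME_LIMIT_S = 2.0   # assumed acceptance criterion: search completes within 2 seconds

def search_catalog(term: str) -> list:
    """Stand-in for the operation under test; replace with a call to the real system."""
    time.sleep(0.1)           # simulated work
    return [term]

def test_search_meets_response_time_requirement():
    start = time.perf_counter()
    results = search_catalog("agile testing")
    elapsed = time.perf_counter() - start
    assert results                             # functional check: something is returned
    assert elapsed <= RESPONSE_TIME_LIMIT_S    # non-functional check: time limit respected
```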
Exploratory Testing and Agile Testing
Exploratory testing is important in agile projects due to the limited time available for test analysis and the limited details of user stories. For best results, exploratory testing should be combined with other experience-based techniques as part of a reactive testing strategy and mixed with other testing strategies such as analytical risk-based testing, analytical requirements-based testing, model-based testing, and regression-averse testing.
In exploratory testing, test design and test execution take place simultaneously, guided by a prepared test charter. A test charter specifies the test conditions to be covered during a time-boxed testing session. In exploratory testing, the results of the most recent tests guide the next test. The same white-box and black-box test design techniques used for pre-planned tests can also be applied here.
A test charter may contain the following information (a structured sketch follows the list):
- Actor: intended user of the system.
- Purpose: the subject of the charter, including the specific goal the actor wants to achieve, i.e., the test conditions
- Setup: what needs to be in place to start the testing
- Priority: relative importance of this charter, based on the priority of the associated user story or risk level
- Reference: specifications (e.g., user story), risks, or other sources of information
- Data: all the data needed to carry out the charter
- Activities: a list of ideas about what the actor would like to do with the system (e.g., “log in to the system as a superuser”) and what would be interesting to test (both positive and negative tests)
- Oracle notes: how to evaluate the product to determine correct results (e.g., to record what happens on the screen and compare it to what is in the user manual)
- Variations: alternative actions and evaluations to complement the ideas described under Activities.
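The fields above do not prescribe a format; purely as an illustration, a team could capture charters in a lightweight structured form such as the following Python sketch (all names and values are hypothetical).

```python
from dataclasses import dataclass, field

@dataclass
class TestCharter:
    """Illustrative structure for an exploratory test charter; the field
    names mirror the list above and are not a prescribed format."""
    actor: str
    purpose: str
    setup: str
    priority: str
    references: list[str] = field(default_factory=list)
    data: str = ""
    activities: list[str] = field(default_factory=list)
    oracle_notes: str = ""
    variations: list[str] = field(default_factory=list)

charter = TestCharter(
    actor="Superuser",
    purpose="Explore user administration; test condition: access rights are enforced",
    setup="Test system with two prepared user accounts",
    priority="High (associated user story carries a high risk level)",
    references=["User story US-42 (hypothetical)"],
    activities=["Log in to the system as a superuser",
                "Try to delete the last remaining administrator account"],
    oracle_notes="Compare observed behavior with the user administration help pages",
    variations=["Repeat the actions as a user without administrator rights"],
)
```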
A method called session-based test management can be used to manage exploratory testing. A session is defined as an uninterrupted testing period that can last from 60 to 120 minutes. Test sessions include the following:
- Survey session (to learn how it works).
- Analysis session (to evaluate functionality or features).
- Deep coverage (corner cases, scenarios, interactions).
The quality of the tests depends on the testers’ ability to ask relevant questions about the test item. Examples include the following:
- What is the most important thing to find out about the system?
- In what ways can the system fail?
- What happens if…?
- What should happen if…?
- Are the customer’s needs, requirements, and expectations being met?
- Is it possible to install (and remove, if necessary) the system in all supported upgrade paths?
During test execution, the tester uses creativity, intuition, insight, and skill to find potential problems with the product. The tester must also have a good knowledge and understanding of the software under test, the business domain, how the software is used, and how to determine when the system is failing.
A number of heuristics can be used in testing. A heuristic can guide the tester in performing the tests and evaluating the results. Examples include:
- Boundaries
- CRUD (create, read, update, delete); a small sketch of this heuristic follows the list
- Variations in configuration
- Interruptions (e.g., logging off, shutting down, or restarting).
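As an illustration of the CRUD heuristic, the sketch below walks a hypothetical in-memory customer store through create, read, update, and delete, checking the state after each step; the CustomerStore class exists only to make the example self-contained.

```python
class CustomerStore:
    """Stand-in for the component under test."""
    def __init__(self):
        self._records = {}

    def create(self, cid, name):
        self._records[cid] = name

    def read(self, cid):
        return self._records.get(cid)

    def update(self, cid, name):
        if cid in self._records:
            self._records[cid] = name

    def delete(self, cid):
        self._records.pop(cid, None)

def test_crud_heuristic():
    store = CustomerStore()
    store.create(1, "Ada")
    assert store.read(1) == "Ada"            # Create + Read
    store.update(1, "Ada Lovelace")
    assert store.read(1) == "Ada Lovelace"   # Update
    store.delete(1)
    assert store.read(1) is None             # Delete
```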
It is important that the tester document the process as much as possible. Otherwise, it would be difficult to go back and reconstruct how a problem in the system was discovered. The following list provides examples of information that may be useful to document:
- Test coverage: what input data was used, how much was covered, and how much still needs to be tested
- Evaluation notes: observations during testing, whether the system and the feature under test appear to be stable, whether defects were found, what is planned as the next step based on the current observations, and any other ideas
- Risk/strategy list: which risks have been covered and which of the most important ones remain, whether the original strategy is being followed, and whether changes are needed
- Problems, issues, and anomalies: unexpected behavior, questions about the effectiveness of the approach, concerns about the ideas/testing, test environment, test data, misunderstandings about the function, test script, or system under test
- Actual behavior: Record of the actual behavior of the system that needs to be saved (e.g., video, screen recordings, output files).
Logged information should be captured and/or summarized in some form of status management tools (e.g., test management tools, task management tools, task board) so that stakeholders can easily understand the current status of all tests being performed.
Source: ISTQB® Certified Tester Foundation Level Agile Tester Syllabus, Version 1.0