Techniques in Agile Projects

Many of the test techniques and testing levels that apply to traditional projects can also be applied to Agile projects. However, Agile projects involve some specific considerations and variations in test techniques, terminology, and documentation that should be taken into account.

Acceptance Criteria, Adequate Coverage, and Other Information for Testing

Agile projects outline initial requirements as user stories in a prioritized backlog at the start of the project. Initial requirements are short and usually follow a predefined format (see Section 1.2.2). Non-functional requirements, such as usability and performance, are also important and can be specified in separate user stories or linked to other functional user stories. Non-functional requirements may follow a predefined format or standard, such as [ISO25000], or an industry-specific standard.

The user stories serve as an important test basis. Other possible test bases include:

  • Experience from previous projects
  • Existing functions, features, and quality characteristics of the system
  • Code, architecture, and design
  • User profiles (context, system configurations, and user behavior)
  • Information on defects from existing and previous projects
  • A categorization of defects in a defect taxonomy
  • Applicable standards (e.g., [DO-178B] for avionics software)
  • Quality risks (see Section 3.2.1)

During each iteration, developers create code which implements the functions and features described in the user stories, with the relevant quality characteristics, and this code is verified and validated via acceptance testing. To be testable, acceptance criteria should address the following topics where relevant [Wiegers13]:

  • Functional behavior: The externally observable behavior of the system, given user actions as input and operating under certain configurations.
  • Quality characteristics: How the system performs the specified behavior. The characteristics may also be referred to as quality attributes or non-functional requirements. Common quality characteristics are performance, reliability, usability, etc.
  • Scenarios (use cases): A sequence of actions between an external actor (often a user) and the system, in order to accomplish a specific goal or business task.
  • Business rules: Activities that can only be performed in the system under certain conditions defined by outside procedures and constraints (e.g., the procedures used by an insurance company to handle insurance claims).
  • External interfaces: Descriptions of the connections between the system to be developed and the outside world. External interfaces can be divided into different types (user interface, interface to other systems, etc.).
  • Constraints: Any design and implementation constraint that will restrict the options for the developer. Devices with embedded software must often respect physical constraints such as size, weight, and interface connections.
  • Data definitions: The customer may describe the format, data type, allowed values, and default values for a data item in the composition of a complex business data structure (e.g., the ZIP code in a U.S. mail address); a sketch follows this list.
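
To make such criteria concrete, the sketch below turns the ZIP-code data definition into automated example checks. The validator and its rules (five digits, optional ZIP+4 extension) are assumptions introduced for illustration only:

    import re

    def validate_us_zip(zip_code: str) -> bool:
        """Hypothetical validator: five digits, optionally followed by a hyphen and four digits (ZIP+4)."""
        return bool(re.fullmatch(r"\d{5}(-\d{4})?", zip_code))

    # Example-based checks derived from the data definition
    def test_accepts_five_digit_zip():
        assert validate_us_zip("90210")

    def test_accepts_zip_plus_four():
        assert validate_us_zip("90210-1234")

    def test_rejects_too_short_value():
        assert not validate_us_zip("9021")

    def test_rejects_letters():
        assert not validate_us_zip("ABCDE")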

In addition to the user stories and their associated acceptance criteria, other information is relevant for the tester, including:

  • How the system is supposed to work and be used
  • The system interfaces that can be used/accessed to test the system
  • Whether current tool support is sufficient
  • Whether the tester has enough knowledge and skill to perform the necessary tests

Testers will often discover the need for additional information (e.g., code coverage) throughout the iterations and should work collaboratively with the rest of the Agile team members to obtain that information. Relevant information plays a part in determining whether a particular activity can be considered done. This concept of the definition of done is critical in Agile projects and applies in a number of different ways as discussed in the following sub-subsections.

Test Levels
Each test level has its own definition of done. The following list gives examples that may be relevant for the different test levels.

  • Unit testing
    • 100% decision coverage where possible, with careful reviews of any infeasible paths (see the sketch after this list)
    • Static analysis performed on all code
    • No unresolved major defects (ranked based on priority and severity)
    • No known unacceptable technical debt remaining in the design and the code [Jones11]
    • All code, unit tests, and unit test results reviewed
    • All unit tests automated
    • Important characteristics are within agreed limits (e.g., performance)
  • Integration testing
    • All functional requirements tested, including both positive and negative tests, with the number of tests based on size, complexity, and risks
    • All interfaces between units tested
    • All quality risks covered according to the agreed extent of testing
    • No unresolved major defects (prioritized according to risk and importance)
    • All defects found are reported
    • All regression tests automated, where possible, with all automated tests stored in a common repository
  • System testing
    • End-to-end tests of user stories, features, and functions
    • All user personas covered
    • The most important quality characteristics of the system covered (e.g., performance, robustness, reliability)
    • Testing done in one or more production-like environments, including all hardware and software for all supported configurations, to the extent possible
    • All quality risks covered according to the agreed extent of testing
    • All regression tests automated, where possible, with all automated tests stored in a common repository
    • All defects found are reported and possibly fixed
    • No unresolved major defects (prioritized according to risk and importance)
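
As an illustration of the 100% decision coverage criterion for unit testing referenced above, the sketch below assumes a hypothetical shipping-fee function with a single decision; two automated tests together exercise both of its outcomes:

    # Hypothetical unit under test containing a single decision (if/else)
    def shipping_fee(order_total: int) -> int:
        if order_total >= 100:   # decision outcome: true
            return 0             # free shipping
        return 5                 # decision outcome: false

    # Two automated unit tests achieve 100% decision coverage
    def test_free_shipping_at_threshold():
        assert shipping_fee(100) == 0    # covers the true branch

    def test_standard_fee_below_threshold():
        assert shipping_fee(99) == 5     # covers the false branch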

User Story
The definition of done for user stories may be determined by the following criteria:

  • The user stories selected for the iteration are complete, understood by the team, and have detailed, testable acceptance criteria
  • All the elements of the user story, including the user story acceptance tests, have been specified, reviewed, and completed
  • Tasks necessary to implement and test the selected user stories have been identified and estimated by the team

Feature
The definition of done for features, which may span multiple user stories or epics, may include:

  • All constituent user stories, with acceptance criteria, are defined and approved by the customer
  • The design is complete, with no known technical debt
  • The code is complete, with no known technical debt or unfinished refactoring
  • Unit tests have been performed and have achieved the defined level of coverage
  • Integration tests and system tests for the feature have been performed according to the defined coverage criteria
  • No major defects remain to be corrected
  • Feature documentation is complete, which may include release notes, user manuals, and on-line help functions

Iteration
The definition of done for the iteration may include the following:

  • All features for the iteration are ready and individually tested according to the feature level criteria
  • Any non-critical defects that cannot be fixed within the constraints of the iteration have been added to the product backlog and prioritized
  • Integration of all features for the iteration completed and tested
  • Documentation written, reviewed, and approved

At this point, the software is potentially releasable because the iteration has been successfully completed, but not all iterations result in a release.

Release
The definition of done for a release, which may span multiple iterations, may include the following areas:

  • Coverage: All relevant test basis elements for all contents of the release have been covered by testing. The adequacy of the coverage is determined by what is new or changed, its complexity and size, and the associated risks of failure.
  • Quality: The defect intensity (e.g., how many defects are found per day or per transaction), the defect density (e.g., the number of defects found compared to the number of user stories, effort, and/or quality attributes), and the estimated number of remaining defects are within acceptable limits; the consequences of unresolved and remaining defects (e.g., their severity and priority) are understood and acceptable; and the residual level of risk associated with each identified quality risk is understood and acceptable (a worked example follows this list).
  • Time: If the pre-determined delivery date has been reached, the business considerations associated with releasing and not releasing need to be considered.
  • Cost: The estimated lifecycle cost should be used to calculate the return on investment for the delivered system (i.e., the calculated development and maintenance cost should be considerably lower than the expected total sales of the product). The main part of the lifecycle cost often comes from maintenance after the product has been released, due to the number of defects escaping to production.
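
As a worked illustration of the quality measures mentioned above, the sketch below computes defect intensity and defect density from sample figures. The figures and the exact formulas are assumptions for illustration; each team agrees on its own definitions and acceptable limits:

    # Illustrative calculations for release quality measures
    defects_found = 24
    testing_days = 12
    user_stories = 60

    defect_intensity = defects_found / testing_days   # 2.0 defects per day
    defect_density = defects_found / user_stories     # 0.4 defects per user story

    print(f"Defect intensity: {defect_intensity:.1f} defects/day")
    print(f"Defect density:   {defect_density:.2f} defects/user story")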

Applying Acceptance Test-Driven Development

Acceptance test-driven development is a test-first approach. Test cases are created prior to implementing the user story. The test cases are created by the Agile team, including the developer, the tester, and the business representatives [Adzic09] and may be manual or automated. The first step is a specification workshop where the user story is analyzed, discussed, and written by developers, testers, and business representatives. Any incompleteness, ambiguities, or errors in the user story are fixed during this process.

The next step is to create the tests. This can be done by the team together or by the tester individually. In any case, an independent person such as a business representative validates the tests. The tests are examples that describe the specific characteristics of the user story. These examples will help the team implement the user story correctly. Since examples and tests are the same, these terms are often used interchangeably. The work starts with basic examples and open questions.

Typically, the first tests are the positive tests, confirming the correct behavior without exception or error conditions, comprising the sequence of activities executed if everything goes as expected. After the positive path tests are done, the team should write negative path tests and cover non-functional attributes as well (e.g., performance, usability). Tests are expressed in a way that every stakeholder is able to understand, containing sentences in natural language involving the necessary preconditions, if any, the inputs, and the related outputs.

The examples must cover all the characteristics of the user story and should not add to the story. This means that an example should not exist which describes an aspect of the user story not documented in the story itself. In addition, no two examples should describe the same characteristics of the user story.
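
A minimal sketch of what such examples can look like once automated is shown below. The user story ("a customer cannot withdraw more than the current account balance"), the Account class, and the test names are assumptions introduced for illustration; the positive example covers the expected behavior and the negative example covers the business rule:

    import pytest

    # Minimal hypothetical implementation so the examples can run;
    # in a real project the production code would be tested instead.
    class InsufficientFunds(Exception):
        pass

    class Account:
        def __init__(self, balance: int):
            self.balance = balance

        def withdraw(self, amount: int) -> None:
            if amount > self.balance:
                raise InsufficientFunds(amount)
            self.balance -= amount

    # Positive example: the expected, error-free behavior
    def test_withdrawal_within_balance_reduces_the_balance():
        account = Account(balance=100)
        account.withdraw(40)
        assert account.balance == 60

    # Negative example: the business rule is enforced
    def test_withdrawal_above_balance_is_refused():
        account = Account(balance=100)
        with pytest.raises(InsufficientFunds):
            account.withdraw(101)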

Functional and Non-Functional Black Box Test Design

In Agile testing, many tests are created by testers concurrently with the developers’ programming activities. Just as the developers are programming based on the user stories and acceptance criteria, so are the testers creating tests based on user stories and their acceptance criteria. (Some tests, such as exploratory tests and some other experience-based tests, are created later, during test execution, as explained in Section 3.3.4.) Testers can apply traditional black box test design techniques such as equivalence partitioning, boundary value analysis, decision tables, and state transition testing to create these tests. For example, boundary value analysis could be used to select test values when a customer is limited in the number of items they may select for purchase.
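
A minimal sketch of such a test, assuming a hypothetical rule that a customer may order between 1 and 10 items; boundary value analysis selects values on and just outside each boundary:

    import pytest

    # Hypothetical validation rule: a customer may order between 1 and 10 items
    def is_valid_order_quantity(quantity: int) -> bool:
        return 1 <= quantity <= 10

    # Boundary value analysis: values on and just outside each boundary
    @pytest.mark.parametrize("quantity, expected", [
        (0, False),   # just below lower boundary
        (1, True),    # lower boundary
        (10, True),   # upper boundary
        (11, False),  # just above upper boundary
    ])
    def test_order_quantity_boundaries(quantity, expected):
        assert is_valid_order_quantity(quantity) == expected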

In many situations, non-functional requirements can be documented as user stories. Black box test design techniques (such as boundary value analysis) can also be used to create tests for non-functional quality characteristics. The user story might contain performance or reliability requirements. For example, a given operation may be required to complete within a time limit, or a given function may be allowed to fail no more than a specified number of times.
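
A minimal sketch of such a non-functional check, assuming a hypothetical search operation and an agreed limit of 2 seconds (both are illustrative values, not taken from the syllabus):

    import time

    # Hypothetical operation under test; a real test would exercise the actual system
    def search_catalog(term: str) -> list:
        time.sleep(0.1)  # stand-in for the real work
        return [term]

    def test_search_completes_within_agreed_time_limit():
        start = time.perf_counter()
        search_catalog("agile testing")
        elapsed = time.perf_counter() - start
        assert elapsed < 2.0  # acceptance criterion: response within 2 seconds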

For more information about the use of black box test design techniques, see the Foundation Level syllabus [ISTQB_FL_SYL] and the Advanced Level Test Analyst syllabus [ISTQB_ALTA_SYL].

Exploratory Testing and Agile Testing

Exploratory testing is important in Agile projects due to the limited time available for test analysis and the limited details of the user stories. In order to achieve the best results, exploratory testing should be combined with other experience-based techniques as part of a reactive testing strategy, blended with other testing strategies such as analytical risk-based testing, analytical requirements-based testing, model-based testing, and regression-averse testing. Test strategies and test strategy blending is discussed in the Foundation Level syllabus [ISTQB_FL_SYL].

In exploratory testing, test design and test execution occur at the same time, guided by a prepared test charter. A test charter provides the test conditions to cover during a time-boxed testing session. During exploratory testing, the results of the most recent tests guide the next test. The same white box and black box techniques can be used to design the tests as when performing pre-designed testing.

A test charter may include the following information:

  • Actor: intended user of the system
  • Purpose: the theme of the charter including what particular objective the actor wants to achieve, i.e., the test conditions
  • Setup: what needs to be in place in order to start the test execution
  • Priority: relative importance of this charter, based on the priority of the associated user story or the risk level
  • Reference: specifications (e.g., user story), risks, or other information sources
  • Data: whatever data is needed to carry out the charter
  • Activities: a list of ideas of what the actor may want to do with the system (e.g., “Log on to the system as a super user”) and what would be interesting to test (both positive and negative tests)
  • Oracle notes: how to evaluate the product to determine correct results (e.g., to capture what happens on the screen and compare to what is written in the user’s manual)
  • Variations: alternative actions and evaluations to complement the ideas described under activities
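
The charter fields above are typically recorded in a lightweight document or tool; the sketch below simply captures them in a small data structure so they can be shared and tracked. The story and risk identifiers are invented for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class TestCharter:
        actor: str
        purpose: str
        setup: str
        priority: str
        reference: str
        data: str
        activities: list = field(default_factory=list)
        oracle_notes: str = ""
        variations: list = field(default_factory=list)

    charter = TestCharter(
        actor="Super user",
        purpose="Explore user administration: can accounts be created, locked, and removed?",
        setup="Test system restored from the baseline database",
        priority="High (based on the associated user story and risk level)",
        reference="User story US-42, risk R-7 (illustrative identifiers)",
        data="Prepared list of valid and invalid user names",
        activities=["Log on to the system as a super user",
                    "Create, lock, and delete user accounts"],
        oracle_notes="Compare screen output with the user's manual",
        variations=["Repeat with a user who lacks administration rights"],
    )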

To manage exploratory testing, a method called session-based test management can be used. A session is defined as an uninterrupted period of testing which could last from 60 to 120 minutes. Test sessions include the following:

  • Survey session (to learn how it works)
  • Analysis session (evaluation of the functionality or characteristics)
  • Deep coverage (corner cases, scenarios, interactions)

The quality of the tests depends on the testers’ ability to ask relevant questions about what to test. Examples include the following:

  • What is most important to find out about the system?
  • In what way may the system fail?
  • What happens if…?
  • What should happen when…?
  • Are customer needs, requirements, and expectations fulfilled?
  • Can the system be installed (and removed if necessary) through all supported upgrade paths?

During test execution, the tester uses creativity, intuition, cognition, and skill to find possible problems with the product. The tester also needs to have good knowledge and understanding of the software under test, the business domain, how the software is used, and how to determine when the system fails.

A set of heuristics can be applied when testing. A heuristic can guide the tester in how to perform the testing and to evaluate the results [Hendrickson]. Examples include:

  • Boundaries
  • CRUD (Create, Read, Update, Delete)
  • Configuration variations
  • Interruptions (e.g., log off, shut down, or reboot)
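
A minimal sketch of applying the CRUD heuristic, assuming a hypothetical in-memory repository; the same create, read, update, delete sequence would guide an exploratory session against the real system:

    # Hypothetical repository used to illustrate the CRUD heuristic
    class CustomerRepository:
        def __init__(self):
            self._rows = {}

        def create(self, cid, name):
            self._rows[cid] = name

        def read(self, cid):
            return self._rows.get(cid)

        def update(self, cid, name):
            self._rows[cid] = name

        def delete(self, cid):
            self._rows.pop(cid, None)

    def test_crud_heuristic_on_customer_records():
        repo = CustomerRepository()
        repo.create(1, "Ada")                    # Create
        assert repo.read(1) == "Ada"             # Read
        repo.update(1, "Ada Lovelace")           # Update
        assert repo.read(1) == "Ada Lovelace"
        repo.delete(1)                           # Delete
        assert repo.read(1) is None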

It is important for the tester to document the process as much as possible. Otherwise, it would be difficult to go back and see how a problem in the system was discovered. The following list provides examples of information that may be useful to document:

  • Test coverage: what input data have been used, how much has been covered, and how much remains to be tested
  • Evaluation notes: observations during testing, do the system and feature under test seem to be stable, were any defects found, what is planned as the next step according to the current observations, and any other list of ideas
  • Risk/strategy list: which risks have been covered and which ones remain among the most important ones, will the initial strategy be followed, does it need any changes
  • Issues, questions, and anomalies: any unexpected behavior, any questions regarding the efficiency of the approach, any concerns about the ideas/test attempts, test environment, test data, misunderstanding of the function, test script or the system under test
  • Actual behavior: recording of actual behavior of the system that needs to be saved (e.g., video, screen captures, output data files)

The information logged should be captured and/or summarized into some form of status management tools (e.g., test management tools, task management tools, the task board), in a way that makes it easy for stakeholders to understand the current status for all testing that was performed.

Source: ISTQB® Foundation Level Agile Tester – Syllabus Version 2014
