Performance Testing Activities

Performance testing is organized and performed differently depending on the software development lifecycle model in use.

Sequential development models
The ideal practice for performance testing in sequential development models is to include performance criteria as part of the acceptance criteria established at the beginning of a project. To reinforce the lifecycle view of testing, performance testing should be performed throughout the software development lifecycle. As the project progresses, each successive performance testing activity should be based on the items defined in the previous activities (see below).

  • Concept – Verify that the system performance objectives are defined as acceptance criteria for the project.
  • Requirements – Verify that the performance requirements are defined and accurately reflect the needs of the stakeholders.
  • Analysis and Design – Verify that the system design reflects the performance requirements.
  • Coding/Implementation – Verify that the code is efficient and reflects the performance requirements and design.
  • Component testing – Perform performance testing at the component level.
  • Component integration testing – Perform performance testing at the component integration level.
  • System testing – Perform system-level performance testing that includes hardware, software, procedures, and data representative of the production environment. System interfaces may be simulated provided the simulations realistically represent production performance.
  • System integration testing – Perform performance testing with the entire system representative of the production environment.
  • Acceptance testing – Validate that system performance meets the originally stated user requirements and acceptance criteria (a minimal sketch of such a check follows this list).
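
To make the idea of performance criteria as acceptance criteria concrete, the following minimal Python sketch encodes a few hypothetical criteria (response time, error rate, throughput) as data and checks measured results against them, as might be done during acceptance testing. The metric names, thresholds, and measured values are illustrative assumptions, not values from the syllabus; a project would substitute its own agreed criteria and real measurements.

```python
"""Illustrative sketch: performance acceptance criteria expressed as data."""

# Hypothetical acceptance criteria agreed at project start (Concept/Requirements).
ACCEPTANCE_CRITERIA = {
    "p95_response_time_ms": 2000,   # 95th percentile response time budget
    "max_error_rate_pct": 1.0,      # tolerated error rate under target load
    "min_throughput_tps": 150,      # minimum transactions per second
}

# Hypothetical results gathered during acceptance testing.
MEASURED_RESULTS = {
    "p95_response_time_ms": 1740,
    "max_error_rate_pct": 0.4,
    "min_throughput_tps": 163,
}


def evaluate(criteria: dict, results: dict) -> list[str]:
    """Return human-readable failures; an empty list means the system is accepted."""
    failures = []
    for metric, limit in criteria.items():
        value = results[metric]
        # 'min_' metrics must meet or exceed the limit; all others must not exceed it.
        ok = value >= limit if metric.startswith("min_") else value <= limit
        if not ok:
            failures.append(f"{metric}: measured {value}, required {limit}")
    return failures


if __name__ == "__main__":
    problems = evaluate(ACCEPTANCE_CRITERIA, MEASURED_RESULTS)
    print("Accepted" if not problems else "Rejected:\n" + "\n".join(problems))
```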

Iterative and incremental development models
In these development models, such as Agile, performance testing is also an iterative and incremental activity. It can begin in the first iteration or be carried out in an iteration dedicated solely to performance testing. However, in these lifecycle models, performance testing may also be performed by a separate team dedicated to performance testing.

Continuous integration (CI) is typically used in iterative and incremental software development lifecycles and enables highly automated test execution. The most common goal of testing in CI is to perform regression testing and ensure that each build is stable. Performance testing can be part of the automated testing performed in CI if the tests are designed to be executed at the build level. However, unlike automated functional testing, it raises additional issues such as the following:

  • Setting up the performance test environment – This often requires a test environment that is available on-demand, such as a cloud-based performance test environment.
  • Determining which performance tests to automate in CI – Due to the short timeframe available for CI testing, CI performance tests may be a subset of more extensive performance tests performed by a team of specialists at other times during an iteration.
  • Creating performance tests for CI – The main goal of performance testing as part of CI is to ensure that a change does not have a negative impact on performance. Depending on the changes made for a particular build, new performance tests may be required.
  • Performing performance tests on parts of an application or system – This often requires that the tools and test environments be able to perform rapid performance tests, including the ability to select subsets of the applicable tests (a minimal build-level test sketch follows this list).
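
As an illustration of a performance test designed to run at the build level, the following Python sketch (standard library only) times a small number of requests against a service and fails if the 95th percentile exceeds a budget. The endpoint URL, sample size, and 500 ms threshold are hypothetical assumptions to be replaced with the team's own criteria; a real project might instead use a dedicated load testing tool triggered by the CI pipeline.

```python
"""Minimal CI performance smoke test sketch (illustrative only)."""
import statistics
import time
import urllib.request

ENDPOINT = "http://localhost:8080/health"   # hypothetical service under test
SAMPLES = 20                                # small sample to keep CI runs short
P95_THRESHOLD_MS = 500                      # hypothetical acceptance criterion


def measure_response_times() -> list[float]:
    """Issue a fixed number of requests and record each response time in ms."""
    timings = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        with urllib.request.urlopen(ENDPOINT, timeout=5) as response:
            response.read()
        timings.append((time.perf_counter() - start) * 1000)
    return timings


def test_p95_response_time_within_budget():
    """Fail the build if the 95th percentile exceeds the agreed budget."""
    timings = measure_response_times()
    p95 = statistics.quantiles(timings, n=20)[-1]  # 95th percentile cut point
    assert p95 <= P95_THRESHOLD_MS, f"p95 {p95:.0f} ms exceeds {P95_THRESHOLD_MS} ms"


if __name__ == "__main__":
    test_p95_response_time_within_budget()
    print("CI performance smoke test passed.")
```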

Performance testing in iterative and incremental software development lifecycles may also have its own lifecycle activities:

  • Release Planning – In this activity, performance testing is considered from the perspective of all iterations of a release, from the first iteration to the last. Performance risks are identified and assessed, and remedial actions are planned. This often includes planning for final performance testing prior to the release of the application.
  • Iteration Planning – As part of each iteration, performance testing may be performed within the iteration and upon its completion. Performance risks are assessed in more detail for each user story.
  • Creation of user stories – User stories often form the basis for performance requirements in agile methods, with the specific performance measures described in the associated acceptance criteria. These are referred to as “non-functional” user stories.
  • Performance Test Design – Performance requirements and criteria described in specific user stories are used to design tests.
  • Coding/Implementation – During coding, performance tests may be performed at the component level. An example would be tuning algorithms for optimal performance efficiency (a minimal micro-benchmark sketch follows this list).
  • Testing/Evaluation – While testing is typically performed close to development activities, performance testing may be performed as a separate activity, depending on the scope and objective of performance testing during the iteration. For example, if the goal is to test the performance of the iteration as a completed set of user stories, a larger scope of performance testing is required than for a single user story. In that case, performance testing can be scheduled in a separate iteration.
  • Delivery – As the application is delivered into the production environment, performance needs to be monitored to determine whether the application achieves the desired performance levels in actual use.
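
The following Python sketch illustrates the kind of component-level check mentioned under Coding/Implementation above: a micro-benchmark comparing two implementations of the same lookup operation to guide algorithm tuning. The data sizes and implementations are illustrative assumptions, not part of the syllabus.

```python
"""Component-level micro-benchmark sketch (illustrative only)."""
import timeit

DATA = list(range(10_000))            # hypothetical reference data
LOOKUPS = list(range(0, 10_000, 7))   # hypothetical lookup workload


def lookup_linear(values, targets):
    """O(n) membership test per lookup against a list."""
    return sum(1 for t in targets if t in values)


def lookup_set(values, targets):
    """O(1) average membership test per lookup against a pre-built set."""
    value_set = set(values)
    return sum(1 for t in targets if t in value_set)


if __name__ == "__main__":
    for func in (lookup_linear, lookup_set):
        elapsed = timeit.timeit(lambda: func(DATA, LOOKUPS), number=10)
        print(f"{func.__name__}: {elapsed:.3f} s for 10 runs")
```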

Commercial Off-the-Shelf (COTS) and other supplier/acquirer models
Many organizations do not develop their own applications and systems but instead source software from vendors or from open source projects. In such supplier/acquirer models, performance is an important aspect that must be tested from both the supplier (vendor/developer) and acquirer (customer) perspectives.

Regardless of the source of the application, it is often the customer’s responsibility to verify that the performance meets their requirements. For custom vendor-developed software, the performance requirements and associated acceptance criteria should be defined as part of the contract between the vendor and the customer. For COTS applications, it is the sole responsibility of the customer to verify the performance of the product in a realistic test environment prior to deployment.

Source: ISTQB®: Certified Tester Performance Testing Syllabus Version 1.0
