Derivation of performance test objectives
Stakeholders can include users and people with a business or technical background, and they may have different objectives related to performance testing. Stakeholders determine the objectives, the terminology to be used, and the criteria for deciding whether an objective has been met.
The objectives for performance testing are related to these different types of stakeholders. It is useful to distinguish between user-based and technical objectives. User-based objectives focus primarily on end-user satisfaction and business goals. In general, users are less concerned with which features are available or how the product delivers them; they just want to be able to do what they need to do.
Technical objectives, on the other hand, focus on operational issues and answering questions about a system’s scalability or the conditions under which performance degradation can occur.
The main objectives of performance testing include identifying potential risks, finding opportunities for improvement, and identifying necessary changes.
When gathering information from the various stakeholders, the following questions should be answered:
- What transactions will be executed as part of the performance test and what is the expected average response time?
- What system metrics are to be captured (e.g., memory usage, network throughput) and what values are expected?
- What performance improvements are expected from these tests compared to previous test cycles?
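As a minimal sketch of how the answers to these questions might be recorded (the transaction names, metrics, and target values below are invented for illustration), the objectives can be captured in a simple structure that is easy to review with stakeholders:

```python
# Illustrative only: the transactions, metrics, and target values are hypothetical.
performance_test_objectives = {
    "transactions": [
        # Transactions to execute under load and their expected average response times
        {"name": "login", "expected_avg_response_ms": 800},
        {"name": "search_product", "expected_avg_response_ms": 1500},
    ],
    "system_metrics": [
        # System metrics to capture and the values stakeholders expect
        {"metric": "memory_usage_percent", "expected_max": 75},
        {"metric": "network_throughput_mbps", "expected_min": 100},
    ],
    "expected_improvements": [
        # Improvements expected compared to previous test cycles
        {"metric": "search_product average response time (ms)", "previous": 2100, "target": 1500},
    ],
}

for tx in performance_test_objectives["transactions"]:
    print(f"{tx['name']}: expected average response time <= {tx['expected_avg_response_ms']} ms")
```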
The Performance Test Plan
The Performance Test Plan (PTP) is a document created prior to performance testing. The PTP should be referenced in the overall test plan, which also contains the relevant scheduling information, and it is updated as needed once performance testing has begun.
The following information should be included in a PTP:
Objective
The PTP objective describes the goals, strategies, and methods for performance testing. It provides a quantifiable answer to the key question of whether the system is adequate and ready to perform under load.
Test Objectives
The overall test objectives for performance efficiency to be achieved by the system under test (SUT) are listed for each type of stakeholder.
System Overview
A brief description of the SUT provides context for measuring the performance test parameters. The overview should include a general description of the functionality to be tested under load.
Types of Performance Tests to Be Performed
The types of performance tests to be performed are listed (see Section 1.2), along with a description of the purpose of each type.
Acceptance Criteria
Performance tests are intended to determine the responsiveness, throughput, reliability, and/or scalability of the system under a given workload. In general, response time is important to the user, throughput is important to the business, and resource utilization is important to the system. Acceptance criteria should be established for all relevant measures and should address the following items as appropriate:
- General objectives of the performance test
- Service Level Agreements (SLAs)
- Baseline Values – A baseline is a set of measurements used to compare current performance measures with those previously achieved. It can be used to demonstrate specific performance improvements and/or to confirm that the test acceptance criteria have been met. Where possible, the baseline should first be established using cleansed data in the database.
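As an illustration of how such criteria can be checked once results are available (all thresholds and measured values below are invented), current measurements can be compared against both the acceptance criteria and the baseline:

```python
# Hypothetical sketch: comparing current measurements against a baseline and
# agreed acceptance criteria. All numbers are illustrative.
baseline = {"avg_response_ms": 1900, "throughput_tps": 42, "cpu_percent": 81}
current = {"avg_response_ms": 1450, "throughput_tps": 55, "cpu_percent": 68}

acceptance_criteria = {
    "avg_response_ms": lambda v: v <= 1500,   # e.g., derived from an SLA
    "throughput_tps": lambda v: v >= 50,
    "cpu_percent": lambda v: v <= 75,
}

for metric, is_acceptable in acceptance_criteria.items():
    value = current[metric]
    delta = value - baseline[metric]
    status = "PASS" if is_acceptable(value) else "FAIL"
    print(f"{metric}: {value} (baseline {baseline[metric]}, delta {delta:+}) -> {status}")
```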
Test Data
Test data includes a wide range of data that must be provided for a performance test. This data may include the following:
- User account data (e.g., user accounts available for concurrent login)
- User input data (e.g., the data a user would enter into the application to perform a business process)
- Database data (e.g., a database pre-populated with the data to be used during testing)
The test data creation process should consider the following aspects:
- Data extraction from production data
- Importing data into the SUT
- Creation of new data
- Creation of backups that can be used to restore data when new test cycles are run
- Masking or anonymization of data. This practice is applied to production data that contains personal information and may be required under data protection regulations such as the General Data Protection Regulation (GDPR). However, in performance testing, data masking carries an additional risk: the masked data may not have the same characteristics as the real-world data.
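The sketch below illustrates the masking idea only (the field names and hashing scheme are assumptions, and this is not a complete anonymization or GDPR-compliance solution): personal fields are replaced with deterministic pseudonyms so that record counts and data volume are preserved, even though other real-world characteristics may be lost.

```python
# Simplified, hypothetical masking sketch: personal fields are replaced with
# deterministic pseudonyms so that data volume and uniqueness are preserved.
# This is not a complete anonymization or GDPR-compliance solution.
import hashlib

def mask_value(value: str, salt: str = "perf-test") -> str:
    """Replace a personal value with a repeatable, non-reversible token."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:12]

production_rows = [
    {"customer_id": 1, "name": "Alice Example", "email": "alice@example.com", "orders": 14},
    {"customer_id": 2, "name": "Bob Example", "email": "bob@example.com", "orders": 3},
]

masked_rows = [
    {
        "customer_id": row["customer_id"],              # non-personal key kept as-is
        "name": mask_value(row["name"]),                # personal data masked
        "email": mask_value(row["email"]) + "@test.invalid",
        "orders": row["orders"],                        # volume characteristics preserved
    }
    for row in production_rows
]

print(masked_rows)
```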
System Configuration
The system configuration section of the PTP contains the following technical information:
- A description of the specific system architecture, including servers (e.g., web, database, load balancer).
- Definition of multiple tiers
- Specific details of computer hardware (e.g., CPU cores, RAM, solid-state drives (SSD), hard disk drives (HDD)), including versions
- Specific details of software (e.g., applications, operating systems, databases, services used to support the business) including versions
- External systems that work with the SUT, their configuration and version (e.g., e-commerce system with integration to NetSuite)
- SUT build/version identifier
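A brief sketch of how this configuration information might be captured in a machine-readable form alongside the PTP (every component name, version, and hardware value below is a placeholder):

```python
# Hypothetical example of recording the SUT configuration referenced by the PTP.
# Every component, version, and hardware value is a placeholder.
sut_configuration = {
    "build_version": "2.4.1-rc3",
    "tiers": {
        "web":           {"servers": 2, "software": "nginx 1.24",      "cpu_cores": 4,  "ram_gb": 8,  "disk": "SSD"},
        "application":   {"servers": 3, "software": "Java 17 service", "cpu_cores": 8,  "ram_gb": 16, "disk": "SSD"},
        "database":      {"servers": 1, "software": "PostgreSQL 16",   "cpu_cores": 16, "ram_gb": 64, "disk": "SSD"},
        "load_balancer": {"servers": 1, "software": "HAProxy 2.8"},
    },
    "external_systems": [
        {"name": "NetSuite integration", "interface": "REST", "version": "2024.1"},
    ],
}

for tier, details in sut_configuration["tiers"].items():
    print(tier, details)
```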
Test Environment
The test environment is often a separate environment that mimics the production environment but on a smaller scale. This section of the PTP should specify how the performance test results will be applied to the larger production environment. For some systems, the production environment is the only viable option for testing, but in this case, the specific risks of this type of testing need to be discussed.
Test tools are sometimes located outside the actual test environment and may require special access privileges to interact with system components. This is a consideration for the test environment and configuration.
Performance testing can also be performed on a single system component that can operate independently of the other components. This is often less expensive than testing the entire system and can be done as soon as the component has been developed.
Test Tools
This section provides a description of the test tools (and versions) used to script, execute, and monitor the performance tests (see Chapter 5). This list typically includes:
- Tool(s) for simulating user transactions
- Tools for providing load from multiple points within the system architecture (points of presence)
- Tools for monitoring system performance, including those described above under System Configuration
Profiles
Operational profiles provide a repeatable step-by-step flow through the application for a particular use of the system. Aggregation of these operational profiles results in a load profile (commonly referred to as a scenario).
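A minimal sketch of this idea (the step names and virtual user numbers are invented): each operational profile is a repeatable sequence of steps through the application, and the load profile aggregates the profiles with the number of virtual users assigned to each.

```python
# Minimal, hypothetical sketch: operational profiles aggregated into a load profile.
# Step names and virtual user counts are illustrative only.
browse_profile = ["open_home_page", "search_product", "view_product_details"]
purchase_profile = ["open_home_page", "search_product", "add_to_cart", "checkout"]
admin_profile = ["login", "generate_sales_report", "logout"]

# Load profile (scenario): which operational profiles run, and with how many virtual users.
load_profile = [
    {"operational_profile": browse_profile, "virtual_users": 70},
    {"operational_profile": purchase_profile, "virtual_users": 25},
    {"operational_profile": admin_profile, "virtual_users": 5},
]

total_users = sum(entry["virtual_users"] for entry in load_profile)
print(f"Scenario with {total_users} virtual users across {len(load_profile)} operational profiles")
```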
Relevant Metrics
A large number of measurements and metrics can be collected during the execution of a performance test. However, too many measurements can complicate the analysis and negatively impact the actual performance of the application. For these reasons, it is important to identify the measurements and metrics that are most relevant to achieving the objectives of the performance test.
The following table, discussed in more detail in Section 4.4, shows a typical set of metrics for performance testing and monitoring. Performance test objectives should be defined for these metrics, as necessary, for the project:
Performance Metrics

| Type | Metric |
|---|---|
| Virtual User Status | # Passed, # Failed |
| Transaction Response Time | Minimum, Maximum, Average, 90th Percentile |
| Transactions Per Second | # Passed / second, # Failed / second, # Total / second |
| Hits (e.g., on database or web server) | Hits / second: Minimum, Maximum, Average, Total |
| Throughput | Bits / second: Minimum, Maximum, Average, Total |
| HTTP Responses Per Second | Responses / second: Minimum, Maximum, Average, Total; responses by HTTP response code |

Performance Monitoring

| Type | Metric |
|---|---|
| CPU usage | % of available CPU used |
| Memory usage | % of available memory used |
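To make the response-time metrics in the table concrete, the following sketch (with invented sample timings) shows how minimum, maximum, average, and the 90th percentile can be derived from raw per-transaction measurements:

```python
# Illustrative sketch: deriving the response-time metrics in the table above
# from raw per-transaction timings. The sample values are invented.
import statistics

response_times_ms = [420, 480, 510, 530, 560, 610, 640, 700, 950, 1800]  # one value per transaction

metrics = {
    "minimum_ms": min(response_times_ms),
    "maximum_ms": max(response_times_ms),
    "average_ms": statistics.mean(response_times_ms),
    # 90th percentile: 90% of transactions completed within this time
    "p90_ms": statistics.quantiles(response_times_ms, n=10)[-1],
}

test_duration_s = 5
metrics["transactions_per_second"] = len(response_times_ms) / test_duration_s

for name, value in metrics.items():
    print(f"{name}: {value:.1f}")
```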
Risks
Risks may include areas not measured in performance testing and limitations in performance testing (e.g., external interfaces that cannot be simulated, insufficient load, inability to monitor servers). Test environment limitations can also lead to risks (e.g., insufficient data, scaled-down environment).
Communication about performance testing
The tester must be able to communicate to all stakeholders the rationale for the performance testing approach and the activities to be performed (as described in the Performance Test Plan). The issues that need to be addressed in this communication can vary greatly by stakeholder, depending on whether they have a “business/user” interest or a more “technology/operations” focus.
Stakeholders with a business focus
The following factors should be considered when communicating with business-focused stakeholders:
- Stakeholders with a business focus are less interested in the distinction between functional and non-functional quality attributes.
- Technical issues related to tooling, scripting, and load generation are generally of secondary interest.
- The relationship between product risks and performance test objectives must be made clear.
- Stakeholders must be made aware of the balance between the cost of the planned performance tests and the representativeness of the performance test results compared to production conditions.
- The repeatability of the planned performance tests must be communicated. Will the test be difficult to repeat or can it be repeated with minimal effort?
- Project risks must be communicated. These include constraints and dependencies related to test setup, infrastructure requirements (e.g., hardware, tools, data, bandwidth, test environment, resources), and key personnel dependencies.
- High-level activities must be communicated along with a comprehensive plan including cost, schedule, and milestones.
Stakeholders with a technology focus
The following factors must be considered when communicating with technology-focused stakeholders:
- The planned approach to establishing the required load profiles must be explained and the expected participation of technology stakeholders must be made clear.
- Detailed steps in setting up and conducting the performance tests must be explained to show the relationship between the tests and the architectural risks.
- The steps required to make performance testing repeatable must be communicated. These may include organizational issues (e.g., involvement of key personnel) as well as technical issues.
- If test environments are to be shared, the schedule for performance testing must be communicated to ensure that test results are not compromised.
- If performance tests must be conducted in the production environment, workarounds for the potential impact on real users must be communicated and agreed upon.
- Technical stakeholders must be clear about their tasks and their timing.
Source: ISTQB®: Certified Tester Performance Testing Syllabus Version 1.0