Principal Performance Testing Activities
Performance testing is iterative in nature. Each test provides valuable insights into application and system performance. The information gathered from one test is used to correct or optimize application and system parameters. The next test iteration will then show the results of the modifications, and so on until the test objectives are reached. Performance testing activities align with the ISTQB test process [ISTQB_FL_SYL].
Test Planning
Test planning is particularly important for performance testing due to the need for the allocation of test environments, test data, tools and human resources. In addition, this is the activity in which the scope of performance testing is established.
During test planning, risk identification and risk analysis activities are completed, and relevant information is updated in any test planning documentation (e.g., test plan, level test plan). Just as test planning is revisited and modified as needed, so are risks, risk levels and risk status modified to reflect changes in risk conditions.
Test Monitoring and Control
Control measures are defined to provide action plans should issues be encountered that might impact performance efficiency, such as
- increasing the load generation capacity if the infrastructure does not generate the desired loads as planned for particular performance tests
- changed, new, or replaced hardware
- changes to network components
- changes to software implementation
The performance test objectives are evaluated to determine whether the exit criteria have been achieved.
Test Analysis
Effective performance tests are based on an analysis of performance requirements, test objectives, Service Level Agreements (SLA), IT architecture, process models and other items that comprise the test basis. This activity may be supported by modeling and analysis of system resource requirements and/or behavior using spreadsheets or capacity planning tools.
Specific test conditions are identified, such as load levels, timing conditions, and transactions to be tested. The required type(s) of performance test (e.g., load, stress, scalability) are then decided.
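As an illustration of such modeling (not prescribed by the syllabus), Little's Law relates the number of concurrent users, the throughput, and the time each user spends per transaction (response time plus think time). The following Python sketch applies it to estimate the throughput a load generator must sustain; all figures are hypothetical.

```python
# Illustrative capacity-planning sketch (hypothetical figures, not from the syllabus).
# Little's Law: concurrent_users = throughput * (response_time + think_time)

def required_throughput(concurrent_users: int,
                        response_time_s: float,
                        think_time_s: float) -> float:
    """Estimate the transaction throughput (tx/s) the load generators must sustain."""
    return concurrent_users / (response_time_s + think_time_s)

if __name__ == "__main__":
    # Hypothetical test condition: 1,000 concurrent users, 2 s response time,
    # 8 s think time between transactions.
    tps = required_throughput(1000, 2.0, 8.0)
    print(f"Load generators must sustain about {tps:.0f} transactions/second")
```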
Test Design
Performance test cases are designed. These are generally created in modular form so that they may be used as the building blocks of larger, more complex performance tests (see Section 4.2).
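As a tool-neutral illustration of this modular approach, the following Python sketch scripts individual transactions as reusable building blocks and composes them into a larger test scenario. The transaction names and the session placeholder are hypothetical; real performance test tools provide their own scripting conventions.

```python
# Hypothetical, tool-neutral sketch of modular performance test design.
# Each function represents one reusable transaction (a building block).

def login(session):
    session.append("POST /login")            # placeholder for the real request

def search_product(session, term):
    session.append(f"GET /search?q={term}")  # placeholder for the real request

def checkout(session):
    session.append("POST /checkout")         # placeholder for the real request

def browse_and_buy_scenario():
    """A larger, more complex test case composed from the building blocks above."""
    session = []                             # stands in for a virtual-user session
    login(session)
    search_product(session, "headphones")
    checkout(session)
    return session

if __name__ == "__main__":
    print(browse_and_buy_scenario())
```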
Test Implementation
In the implementation phase, performance test cases are ordered into performance test procedures. These performance test procedures should reflect the steps normally taken by the user and other functional activities that are to be covered during performance testing.
One test implementation activity is establishing and/or resetting the test environment before each test execution. Since performance testing is typically data-driven, a process is needed to establish test data that is representative of actual production data in volume and type so that production use can be simulated.
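The sketch below illustrates one possible data-establishment step, assuming a hypothetical customer table, record layout, and volume; in practice the data profile would be derived from an analysis of production data.

```python
# Hypothetical sketch: populate a test database with production-like volumes.
import random
import sqlite3

def create_test_data(db_path: str, num_customers: int = 100_000) -> None:
    """Load a representative volume of customer records before each test execution."""
    conn = sqlite3.connect(db_path)
    conn.execute("DROP TABLE IF EXISTS customers")      # reset the environment
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT)")
    regions = ["NA", "EU", "APAC"]                       # assumed distribution
    rows = ((i, random.choice(regions)) for i in range(num_customers))
    conn.executemany("INSERT INTO customers VALUES (?, ?)", rows)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    create_test_data("perf_test.db")
```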
Test Execution
Test execution occurs when the performance test is conducted, often by using performance test tools. Test results are evaluated to determine if the system’s performance meets the requirements and other stated objectives. Any defects are reported.
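Part of this evaluation can be automated. The following sketch checks measured response times against a hypothetical objective (a 95th percentile of 2 seconds or less); the sample values and threshold are illustrative only.

```python
# Illustrative evaluation of measured response times against a stated objective.
import math

def percentile(samples, pct):
    """Return the pct-th percentile of the samples (nearest-rank method)."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100.0 * len(ordered))
    return ordered[rank - 1]

def evaluate(samples, p95_objective_s=2.0):
    """Compare the 95th percentile response time against the objective."""
    p95 = percentile(samples, 95)
    return p95, p95 <= p95_objective_s

if __name__ == "__main__":
    measured = [0.8, 1.1, 0.9, 1.7, 2.4, 1.2, 1.0, 1.5, 1.9, 0.7]  # seconds
    p95, passed = evaluate(measured)
    print(f"95th percentile: {p95:.2f}s -> {'meets' if passed else 'misses'} objective")
```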
Test Completion
Performance test results are provided to the stakeholders (e.g., architects, managers, and product owners) in a test summary report. The results are expressed through metrics, which are often aggregated to make them easier to interpret. Visual means of reporting, such as dashboards, are often used to express performance test results in ways that are easier to understand than text-based metrics.
Performance testing is often considered to be an ongoing activity in that it is performed at multiple times and at all test levels (component, integration, system, system integration, and acceptance testing). At the close of a defined period of performance testing, a point of test closure may be reached where designed tests, test tool assets (test cases and test procedures), test data, and other testware are archived or passed on to other testers for later use during system maintenance activities.
Categories of Performance Risks for Different Architectures
As mentioned previously, application or system performance varies considerably based on the architecture, application, and host environment. While it is not possible to provide a complete list of performance risks for all systems, the list below includes some typical types of risks associated with particular architectures:
Single Computer Systems
These are systems or applications that run entirely on one non-virtualized computer.
Performance can degrade due to
- excessive resource consumption, including memory leaks, background activities such as security software, slow storage subsystems (e.g., low-speed external devices or disk fragmentation), and operating system mismanagement.
- inefficient implementation of algorithms that do not make use of available resources (e.g., main memory) and, as a result, execute slower than required.
Multi-tier Systems
These are systems that run on multiple servers, each of which performs a specific set of tasks, such as database server, application server, and presentation server. Each server is, of course, a computer and subject to the risks given earlier. In addition, performance can degrade due to poor or non-scalable database design, network bottlenecks, and inadequate bandwidth or capacity on any single server.
Distributed Systems
These are systems of systems, similar to a multi-tier architecture, but the various servers may change dynamically, such as in an e-commerce system that accesses different inventory databases depending on the geographic location of the person placing the order. In addition to the risks associated with multi-tier architectures, this architecture can experience performance problems due to critical workflows or dataflows to, from, or through unreliable or unpredictable remote servers, especially when such servers suffer periodic connection problems or intermittent periods of intense load.
Virtualized Systems
These are systems where the physical hardware hosts multiple virtual computers.
These virtual machines may host single-computer systems and applications as well as servers that are part of a multi-tier or distributed architecture. Performance risks that arise specifically from virtualization include excessive load on the hardware across all the virtual machines or improper configuration of the host virtual machine resulting in inadequate resources.
Dynamic/Cloud-based Systems
These are systems that offer the ability to scale on demand, increasing capacity as the level of load increases. These systems are typically distributed and virtualized multi-tier systems, albeit with self-scaling features designed specifically to mitigate some of the performance risks associated with those architectures. However, there are risks associated with failures to properly configure these features during initial setup or subsequent updates.
Client-Server Systems
These are systems running on a client that communicate via a user interface with a single server, multi-tier server, or distributed server. Since there is code running on the client, the single computer risks apply to that code, while the server-side issues mentioned above also apply. Further, performance risks exist due to connection speed and reliability issues, network congestion at the client connection point (e.g., public Wi-Fi), and potential problems due to firewalls, packet inspection, and server load balancing.
Mobile Applications
These are applications running on a smartphone, tablet, or other mobile device. Such applications are subject to the risks mentioned for client-server and browser-based applications (web apps). In addition, performance issues can arise due to the limited and variable resources and connectivity available on the mobile device (which can be affected by location, battery life, charge state, available memory on the device, and temperature). For those applications that use device sensors or radios such as accelerometers or Bluetooth, slow dataflows from those sources could create problems. Finally, mobile applications often have heavy interactions with other local mobile apps and remote web services, any of which can potentially become a performance efficiency bottleneck.
Embedded Real-time Systems
These are systems that work within or even control everyday things such as cars (e.g., entertainment systems and intelligent braking systems), elevators, traffic signals, Heating, Ventilation and Air Conditioning (HVAC) systems, and more. These systems often have many of the risks of mobile devices, including (increasingly) connectivity-related issues since these devices are connected to the Internet. However, the diminished performance of a mobile video game is usually not a safety hazard for the user, while such slowdowns in a vehicle’s braking system could prove catastrophic.
Mainframe Applications
These are applications—in many cases, decades-old applications—that support often mission-critical business functions in a data center, sometimes via batch processing.
Most are quite predictable and fast when used as originally designed, but many of these are now accessible via APIs, web services, or their databases, which can result in unexpected loads that affect the throughput of established applications.
Note that any particular application or system may incorporate two or more of the architectures listed above, which means that all relevant risks will apply to that application or system. In fact, given the Internet of Things and the explosion of mobile applications—two areas where extreme levels of interaction and connection are the rule—it is possible that all architectures are present in some form in an application, and thus all risks can apply.
While architecture is clearly an important technical decision with a profound impact on performance risks, other technical decisions also influence and create risks. For example, memory leaks are more common with languages that allow direct heap memory management, such as C and C++, and performance issues are different for relational versus non-relational databases. Such decisions extend all the way down to the design of individual functions or methods (e.g., the choice of a recursive as opposed to an iterative algorithm). As a tester, your ability to know about or even influence such decisions will vary, depending on the roles and responsibilities of testers within the organization and software development lifecycle.
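To illustrate the last point, the sketch below (in Python for readability; the concern applies to any language) contrasts a naive recursive computation with an iterative equivalent of the same function. The exponential call growth of the former has a direct effect on time behavior; the example is illustrative only.

```python
# Illustrative only: a design-level choice (recursive vs. iterative) with a
# large impact on time behavior.
import time

def fib_recursive(n: int) -> int:
    """Naive recursion: an exponential number of calls."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n: int) -> int:
    """Iteration: linear time, constant extra memory."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == "__main__":
    for fn in (fib_recursive, fib_iterative):
        start = time.perf_counter()
        result = fn(30)
        print(f"{fn.__name__}(30) = {result} in {time.perf_counter() - start:.4f}s")
```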
Performance Risks Across the Software Development Lifecycle
The process of analyzing risks to the quality of a software product in general is discussed in various ISTQB syllabi (e.g., see [ISTQB_FL_SYL] and [ISTQB_ALTM_SYL]). You can also find discussions of specific risks and considerations associated with particular quality characteristics (e.g., [ISTQB_UT_SYL]), and from a business or technical perspective (e.g., see [ISTQB_ALTA_SYL] and [ISTQB_ALTTA_SYL], respectively). In this section, the focus is on performance-related risks to product quality, including ways that the process, the participants, and the considerations change.
For performance-related risks to the quality of the product, the process is:
- Identify risks to product quality, focusing on characteristics such as time behavior, resource utilization, and capacity.
- Assess the identified risks, ensuring that the relevant architecture categories (see Section 3.2) are addressed. Evaluate the overall level of risk for each identified risk in terms of likelihood and impact using clearly defined criteria (a scoring sketch follows this list).
- Take appropriate risk mitigation actions for each risk item based on the nature of the item and the level of risk.
- Manage risks on an ongoing basis to ensure that the risks are adequately mitigated prior to release.
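One simple way to apply clearly defined criteria in the assessment step is a numerical likelihood-times-impact score, as in the sketch below. The scales, risk items, and mitigation threshold are hypothetical and are not mandated by any ISTQB scheme.

```python
# Hypothetical likelihood x impact scoring for performance-related risk items.
# Scales of 1 (low) to 5 (high) are an assumed convention, not an ISTQB rule.

RISKS = [
    {"item": "Checkout response time degrades under peak load",  "likelihood": 4, "impact": 5},
    {"item": "Nightly batch exceeds its processing window",      "likelihood": 2, "impact": 4},
    {"item": "Memory growth in long-running application server", "likelihood": 3, "impact": 3},
]

def assess(risks, mitigation_threshold=12):
    """Attach a risk level to each item and flag those needing mitigation actions."""
    for risk in risks:
        risk["level"] = risk["likelihood"] * risk["impact"]
        risk["mitigate"] = risk["level"] >= mitigation_threshold
    return sorted(risks, key=lambda r: r["level"], reverse=True)

if __name__ == "__main__":
    for r in assess(RISKS):
        flag = "MITIGATE" if r["mitigate"] else "monitor"
        print(f'{r["level"]:>2}  {flag:8}  {r["item"]}')
```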
As with quality risk analysis in general, the participants in this process should include both business and technical stakeholders. For performance-related risk analysis, the business stakeholders must include those with a particular awareness of how performance problems in production will actually affect customers, users, the business, and other downstream stakeholders. Business stakeholders must appreciate that intended usage, business-, societal-, or safety-criticality, potential financial and/or reputational damage, civil or criminal legal liability, and similar factors affect risk from a business perspective, creating risks and influencing the impact of failures.
Further, the technical stakeholders must include those with a deep understanding of the performance implications of relevant requirements, architecture, design, and implementation decisions. Technical stakeholders must appreciate that architecture, design, and implementation decisions affect performance risks from a technical perspective, creating risks and influencing the likelihood of defects.
The specific risk analysis process chosen should have the appropriate level of formality and rigor. For performance-related risks, it is especially important that the risk analysis process be started early and repeated regularly. In other words, the tester should avoid relying entirely on performance testing conducted towards the end of the system test and system integration test levels. Many projects, especially larger and more complex systems-of-systems projects, have met with unfortunate surprises due to the late discovery of performance defects that resulted from requirements, design, architecture, and implementation decisions made early in the project. The emphasis should therefore be on an iterative approach to performance risk identification, assessment, mitigation, and management throughout the software development lifecycle.
For example, if large volumes of data will be handled via a relational database, the slow performance of many-to-many joins due to poor database design may only reveal itself during dynamic testing with large-scale test datasets, such as those used during system testing. However, a careful technical review that includes experienced database engineers can predict the problems prior to database implementation. After such a review, in an iterative approach, risks are identified and assessed again.
In addition, risk mitigation and management must span and influence the entire software development process, not just dynamic testing. For example, when critical performance-related decisions such as the expected number of transactions or simultaneous users cannot be specified early in the project, it is important that design and architecture decisions allow for highly variable scalability (e.g., on-demand cloud-based computing resources). This enables early risk mitigation decisions to be made.
Good performance engineering can help project teams avoid the late discovery of critical performance defects during higher test levels, such as system integration testing or user acceptance testing. Performance defects found at a late stage in the project can be extremely costly and may even lead to the cancellation of entire projects.
As with any type of quality risk, performance-related risks can never be completely avoided, i.e., some risk of performance-related production failure will always exist.
Therefore, the risk management process must include providing a realistic and specific evaluation of the residual level of risk to the business and technical stakeholders involved in the process. Simply stating, “Yes, it is still possible for customers to experience long delays during check out,” for example, is insufficient because it provides no indication of the amount of risk mitigation that has occurred or the level of risk that remains.
Instead, providing clear insight into the percentage of customers likely to experience delays equal to or exceeding certain thresholds will help people understand the status.
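A statement of that kind can be derived directly from measurement data, as in the following sketch; the observed delays and the threshold are hypothetical.

```python
# Hypothetical residual-risk statement: share of users exceeding a delay threshold.

def share_exceeding(delays_s, threshold_s):
    """Percentage of observed checkout delays at or above the threshold."""
    hits = sum(1 for d in delays_s if d >= threshold_s)
    return 100.0 * hits / len(delays_s)

if __name__ == "__main__":
    measured_delays = [1.2, 3.8, 2.1, 5.6, 0.9, 4.4, 2.7, 1.5, 6.2, 2.0]  # seconds
    threshold = 4.0
    pct = share_exceeding(measured_delays, threshold)
    print(f"{pct:.0f}% of sampled checkouts took {threshold:.0f}s or longer")
```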
Performance Testing Activities
Performance testing activities will be organized and performed differently, depending on the type of software development lifecycle in use.
Sequential Development Models
The best practice for performance testing in sequential development models is to incorporate performance criteria into the acceptance criteria that are defined at the start of a project. Reinforcing the lifecycle view of testing, performance testing activities should be conducted throughout the software development lifecycle. As the project progresses, each successive performance test activity should be based on items defined in the prior activities, as shown below.
- Concept: Verify that system performance goals are defined as acceptance criteria for the project.
- Requirements: Verify that performance requirements are defined and represent stakeholder needs correctly.
- Analysis and Design: Verify that the system design reflects the performance requirements.
- Coding/Implementation: Verify that the code is efficient and reflects the requirements and design in terms of performance.
- Component Testing: Conduct component level performance testing.
- Component Integration Testing: Conduct performance testing at the component integration level.
- System Testing: Conduct performance testing at the system level, which includes hardware, software, procedures and data that are representative of the production environment. System interfaces may be simulated provided that they give a true representation of performance.
- System Integration Testing: Conduct performance testing with the entire system, which is representative of the production environment.
- Acceptance Testing: Validate that system performance meets the originally stated user needs and acceptance criteria.
Iterative and Incremental Development Models
In these development models, such as Agile, performance testing is also seen as an iterative and incremental activity (see [ISTQB_FL_AT]). Performance testing can occur as part of the first iteration, or as an iteration dedicated entirely to performance testing.
However, with these lifecycle models, the execution of performance tests may be carried out by a separate team tasked specifically with performance testing.
Continuous Integration (CI) is commonly performed in iterative and incremental software development lifecycles, which facilitates a highly automated execution of tests. The most common objective of testing in CI is to perform regression testing and ensure each build is stable. Performance testing can be part of the automated tests performed in CI if the tests are designed in such a way as to be executed at the build level. However, unlike functional automated tests, there are additional concerns such as the following:
- The setup of the performance test environment: This often requires a test environment that is available on demand, such as a cloud-based performance test environment.
- Determining which performance tests to automate in CI: Due to the short timeframe available for CI tests, CI performance tests may be a subset of more extensive performance tests that are conducted by a specialist team at other times during an iteration (a subset-selection sketch follows this list).
- Creating the performance tests for CI: The main objective of performance tests as part of CI is to ensure a change does not negatively impact performance. Depending on the changes made for any given build, new performance tests may be required.
- Executing performance tests on portions of an application or system: This frequently necessitates the availability of tools and test environments capable of rapid performance testing, including the ability to select subsets of applicable tests.
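The subset selection mentioned above can be handled in many ways. The sketch below assumes a hypothetical tagging convention and build-time budget, and simply selects the shortest "ci"-tagged tests that fit within the budget.

```python
# Hypothetical selection of a CI-friendly subset of performance tests.
# Tags, durations, and the time budget are assumed values for illustration.

PERFORMANCE_TESTS = [
    {"name": "login_smoke_load",     "tags": {"ci"},      "duration_min": 5},
    {"name": "search_baseline_load", "tags": {"ci"},      "duration_min": 8},
    {"name": "checkout_stress",      "tags": {"nightly"}, "duration_min": 45},
    {"name": "scalability_ramp",     "tags": {"release"}, "duration_min": 120},
]

def select_ci_subset(tests, budget_min=15):
    """Pick 'ci'-tagged tests, shortest first, until the build-time budget is used."""
    selected, used = [], 0
    for test in sorted(tests, key=lambda t: t["duration_min"]):
        if "ci" in test["tags"] and used + test["duration_min"] <= budget_min:
            selected.append(test["name"])
            used += test["duration_min"]
    return selected, used

if __name__ == "__main__":
    subset, minutes = select_ci_subset(PERFORMANCE_TESTS)
    print(f"CI performance subset ({minutes} min): {subset}")
```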
Performance testing in the iterative and incremental software development lifecycles can also have its own lifecycle activities:
- Release Planning: In this activity, performance testing is considered from the perspective of all iterations in a release, from the first iteration to the final iteration. Performance risks are identified and assessed, and mitigation measures are planned. This often includes the planning of any final performance testing before the release of the application.
- Iteration Planning: In the context of each iteration, performance testing may be performed within the iteration and as each iteration is completed. Performance risks are assessed in more detail for each user story.
- User Story Creation: User stories often form the basis of performance requirements in Agile methodologies, with the specific performance criteria described in the associated acceptance criteria. These are referred to as “non-functional” user stories.
- Design of Performance Tests: Performance requirements and criteria described in particular user stories are used for the design of tests (see Section 4.2).
- Coding/Implementation: During coding, performance testing may be performed at a component level. An example of this would be the tuning of algorithms for optimum performance efficiency.
- Testing/Evaluation: While testing is typically performed in close proximity to development activities, performance testing may be performed as a separate activity, depending on the scope and objectives of performance testing during the iteration. For example, if the goal of performance testing is to test the performance of the iteration as a completed set of user stories, a wider scope of performance testing will be needed than that seen in performance testing a single user story. This may be scheduled in a dedicated iteration for performance testing.
- Delivery: Since delivery will introduce the application to the production environment, performance will need to be monitored to determine if the application achieves the desired levels of performance in actual usage.
Commercial Off-the-Shelf (COTS) and other Supplier/Acquirer Models
Many organizations do not develop applications and systems themselves, but instead acquire software from vendor sources or from open-source projects. In such supplier/acquirer models, performance is an important consideration that requires testing from both the supplier (vendor/developer) and acquirer (customer) perspectives.
Regardless of the source of the application, it is often the responsibility of the customer to validate that its performance meets their requirements. In the case of customized vendor-developed software, performance requirements and associated acceptance criteria should be specified as part of the contract between the vendor and customer. In the case of COTS applications, the customer has sole responsibility to test the performance of the product in a realistic test environment prior to deployment.
Source: ISTQB®: Certified Tester Performance Testing Syllabus Version 1.0