Performance efficiency (or simply “performance”) is essential to providing a good experience for users of applications on a variety of fixed and mobile platforms. Performance testing plays a critical role in establishing acceptable quality levels for the end user and is often closely aligned with other disciplines such as usability engineering and performance engineering.
In addition, evaluating functional suitability, usability, and other quality attributes under load conditions (e.g., during a performance test) can uncover load-specific issues that affect those attributes.
Performance tests are not limited to the web-based domain, where the end user is the focus. They are also relevant to a variety of application domains with diverse system architectures, such as classic client-server, distributed, and embedded systems. Technically, performance efficiency is categorized in the ISO 25010 [ISO25000] product quality model as a non-functional quality characteristic with the three sub-characteristics described below. The proper focus and prioritization depend on the assessed risks and the needs of the various stakeholders. Analysis of test results may identify further risk areas that need to be addressed.
Timing: In general, assessing timing behavior is the most common goal of performance testing. This aspect of performance testing examines the ability of a component or system to respond to user or system inputs within a specified time and under specified conditions. Measurements can range from the “end-to-end” time the system takes to respond to user input to the number of CPU cycles a software component needs to perform a given task.
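As an illustration, such end-to-end timing can be captured with a simple measurement harness. The following Python sketch uses the standard library’s monotonic high-resolution clock; `handle_request` is a hypothetical stand-in for the operation under test, not a real SUT call:

```python
import time

def handle_request() -> None:
    """Hypothetical operation under test; a real test would invoke the SUT."""
    time.sleep(0.01)  # simulate ~10 ms of work

def measure_response_time(operation, samples: int = 20) -> dict:
    """Time an operation repeatedly and summarize the results in seconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()  # monotonic, high-resolution clock
        operation()
        timings.append(time.perf_counter() - start)
    timings.sort()
    return {
        "min": timings[0],
        "median": timings[len(timings) // 2],
        "max": timings[-1],
    }

stats = measure_response_time(handle_request)
print(f"median response time: {stats['median'] * 1000:.1f} ms")
```

Collecting several samples and reporting a distribution (rather than a single value) matters because individual response times vary with system load.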
Resource utilization: If the availability of system resources is identified as a risk, the utilization of those resources (e.g., the allocation of limited memory) can be investigated through specific performance tests.
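For memory utilization in particular, a minimal observation can be sketched with Python’s standard `tracemalloc` module. Here `build_report` is a hypothetical memory-intensive workload used only for illustration:

```python
import tracemalloc

def build_report(rows: int) -> list:
    """Hypothetical workload that allocates memory proportional to its input."""
    return [{"id": i, "value": i * 2} for i in range(rows)]

tracemalloc.start()
report = build_report(100_000)
current, peak = tracemalloc.get_traced_memory()  # bytes currently held / peak bytes
tracemalloc.stop()

print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
```

The peak value is often the more relevant figure for a resource-utilization risk, since a transient allocation spike can exhaust limited memory even when steady-state usage looks acceptable.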
Capacity: If system behavior at the required capacity limits of the system (e.g., numbers of users or volumes of data) is identified as a risk, performance testing can be performed to assess the suitability of the system architecture.
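A capacity test typically steps up the load and observes how response behavior degrades. A minimal sketch, assuming a hypothetical `serve` handler and thread-based simulated users:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def serve(request_id: int) -> float:
    """Hypothetical request handler; returns its own response time in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for real work performed by the SUT
    return time.perf_counter() - start

def run_at_capacity(users: int, requests_per_user: int = 5) -> float:
    """Drive the handler with the given number of concurrent simulated users
    and return the worst observed response time."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(serve, range(users * requests_per_user)))
    return max(results)

for users in (1, 10, 50):
    worst = run_at_capacity(users)
    print(f"{users:>3} users -> worst response {worst * 1000:.1f} ms")
```

In a real capacity test, the load would be generated against the deployed system by a load-testing tool rather than in-process threads, but the pattern of stepping up concurrency while recording response times is the same.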
Performance testing often takes the form of experimentation that enables the measurement and analysis of specific system parameters. These tests can be performed iteratively to support system analysis, design, and implementation, to inform architectural decisions, and to shape stakeholder expectations.
The following principles for performance testing are particularly important.
- Tests must be aligned with the defined expectations of the various stakeholders, especially users, system developers, and operations personnel.
- Tests must be reproducible. Statistically identical results (within a certain tolerance) must be obtained by repeating the tests on an unmodified system.
- The tests must produce results that are both understandable and easily compared with the expectations of the stakeholders.
- Testing may be performed, resources permitting, on either complete systems or subsystems, or on test environments that are representative of the production system.
- Testing must be practically affordable and feasible within the time frame specified by the project.
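The reproducibility principle above can be made concrete: repeated runs on an unmodified system should agree within a stated tolerance. A sketch of such a check, assuming a simple relative tolerance on mean response time (the tolerance value and comparison rule are illustrative choices, not prescribed by any standard):

```python
def within_tolerance(run_a: list, run_b: list, rel_tol: float = 0.1) -> bool:
    """Compare the mean response times of two test runs; True if their relative
    difference is within rel_tol (e.g., 0.1 = 10 %)."""
    mean_a = sum(run_a) / len(run_a)
    mean_b = sum(run_b) / len(run_b)
    return abs(mean_a - mean_b) <= rel_tol * max(mean_a, mean_b)

# Two repeated runs, response times in seconds
assert within_tolerance([0.10, 0.11, 0.09], [0.10, 0.10, 0.11])      # reproducible
assert not within_tolerance([0.10, 0.11, 0.09], [0.30, 0.32, 0.31])  # not reproducible
```

A real comparison would usually consider the full distribution (e.g., percentiles) rather than only the mean, but the principle is the same: define the tolerance before the test and check repeated runs against it.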
All three of the above quality sub-characteristics affect the scalability of the system under test (SUT).