Quality Characteristics for AI-Based Systems

Flexibility and adaptability

Flexibility and adaptability are closely related quality attributes. In this curriculum, flexibility is considered the ability of the system to be used in situations that were not part of the original system requirements, while adaptability is considered the ease with which the system can be modified for new situations, such as different hardware and changing operating environments.

Both flexibility and adaptability are useful when:

  • The operational environment is not fully known when the system is implemented.
  • The system is expected to cope with new operating environments.
  • The system is expected to adapt to new situations.
  • The system must determine when to change its behavior.

Self-learning AI-based systems are expected to exhibit all of the above characteristics. Consequently, they must be adaptable and have the potential to be flexible.

The requirements for flexibility and adaptability of an AI-based system should include details of any changes in the environment to which the system is expected to adapt. These requirements should also include constraints on the time and resources the system may use to adapt (e.g., how long it may take to adapt to recognizing a new object type).
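Such a time-and-resource constraint can be made testable. The sketch below is illustrative only: the `adapt` callable, the sample data, and the one-second budget are invented assumptions, not part of the syllabus.

```python
import time

def check_adaptation_time(adapt, new_class_samples, budget_seconds):
    """Return True if adaptation completes within the time budget."""
    start = time.monotonic()
    adapt(new_class_samples)  # e.g., fine-tune on examples of the new object type
    elapsed = time.monotonic() - start
    return elapsed <= budget_seconds

# Stand-in adaptation step that simply pauses briefly:
samples = [("image_001", "traffic_cone"), ("image_002", "traffic_cone")]
print(check_adaptation_time(lambda s: time.sleep(0.01), samples, budget_seconds=1.0))
# True: the stand-in step fits comfortably within the budget
```

In a real test, `adapt` would trigger the system's actual retraining or reconfiguration step, and the budget would come from the system's requirements.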


Autonomy

In defining autonomy, it is first important to recognize that a fully autonomous system would be completely independent of human supervision and control. In practice, complete autonomy is often undesirable. For example, fully self-driving cars, popularly referred to as “autonomous,” are officially classified as providing “fully automated driving.”

Many consider autonomous systems to be “smart” or “intelligent,” suggesting that they incorporate AI-based components to perform certain functions. For example, autonomous vehicles, which need situational awareness, typically use multiple sensors and image processing to gather information about the vehicle’s immediate environment. Machine learning, particularly deep learning (see Section 6.1), has proven to be the most effective approach to perform this function. Autonomous systems may also include decision-making and control functions. Both can be effectively performed using AI-based components.

Although some AI-based systems are considered autonomous, this is not true of all AI-based systems. In this curriculum, autonomy is considered to be the ability of the system to operate for extended periods of time independent of human supervision and control. This can help determine the characteristics of an autonomous system that need to be specified and tested. For example, it must be known how long an autonomous system should operate satisfactorily without human intervention. In addition, it is important to identify the events at which the autonomous system must return control to its human controllers.
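The two testable attributes described above (a maximum unsupervised operating period and the events that force a handover) might be sketched as a simple check. All names, events, and the four-hour limit below are hypothetical examples, not syllabus requirements.

```python
# Events that must always return control to the human operator (illustrative):
HANDOVER_EVENTS = {"sensor_failure", "low_confidence", "geofence_exit"}
MAX_AUTONOMOUS_SECONDS = 4 * 60 * 60  # assumed requirement: 4 hours unsupervised

def should_hand_over(event, autonomous_since, now):
    """Return True if control must be returned to the human controller."""
    if event in HANDOVER_EVENTS:
        return True
    return (now - autonomous_since) >= MAX_AUTONOMOUS_SECONDS

start = 0.0
print(should_hand_over("low_confidence", start, now=60.0))       # True: handover event
print(should_hand_over(None, start, now=3600.0))                 # False: 1 h < 4 h
print(should_hand_over(None, start, now=5 * 60 * 60.0))          # True: time limit reached
```

Specifying the event set and the time limit explicitly is what makes the autonomy requirement verifiable by a test.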


Evolution

In this curriculum, evolution is considered to be the ability of the system to improve itself in response to changing external conditions. Some AI systems can be described as self-learning, and successful self-learning AI-based systems must incorporate this form of evolution.

AI-based systems often operate in an evolving environment. Like other forms of IT systems, an AI-based system must be flexible and adaptable enough to cope with changes in its operating environment.

Self-learning AI-based systems typically must cope with two types of change:

  • In one form of change, the system learns from its own decisions and its interactions with its environment.
  • In the other form of change, the system learns from changes made in its operating environment.

In both cases, the system ideally evolves to improve its effectiveness and efficiency. However, this evolution must be constrained to prevent the system from developing undesirable characteristics.
Any evolution must continue to meet the original system requirements and constraints. Where these are lacking, the system must be managed so that any evolution remains within constraints and is always consistent with human values. Section 2.6 provides examples of the impact of side-effects and reward hacking on self-learning AI-based systems.
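One way to keep evolution within the original requirements is to gate every self-learned update behind those requirements. The sketch below is a minimal illustration; the accuracy floor, the speed cap, and the candidate values are invented for the example.

```python
def meets_original_requirements(candidate):
    """Assumed original constraints: accuracy must not drop below 0.90
    and the system must never exceed a speed cap of 120."""
    return candidate["accuracy"] >= 0.90 and candidate["max_speed"] <= 120

def accept_update(current, candidate):
    """Keep the evolved model only if it still satisfies the constraints."""
    return candidate if meets_original_requirements(candidate) else current

current = {"accuracy": 0.92, "max_speed": 100}
evolved = {"accuracy": 0.95, "max_speed": 150}  # better accuracy, violates the cap
print(accept_update(current, evolved))  # falls back to the current model
```

The point is not the specific checks but the pattern: evolution is accepted only when the evolved behavior can be shown to stay inside the stated constraints.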


Bias

In the context of AI-based systems, bias is a statistical measure of the distance between the results provided by the system and “fair results” that show no favoritism toward a particular group. Unreasonable bias can be associated with attributes such as gender, race, ethnicity, sexual orientation, income level, and age. Cases of inappropriate bias in AI-based systems have been reported, for example, in bank lending recommendation systems, hiring systems, and judicial monitoring systems.

Bias can be introduced into many types of AI-based systems. For example, it is difficult to prevent expert bias from influencing the rules applied by an expert system. However, the widespread use of ML systems means that much of the discussion about bias is in the context of these systems.

ML systems are used to make decisions and predictions using algorithms that rely on collected data, and these two components can introduce bias into the results:

  • Algorithmic bias can occur when the learning algorithm is misconfigured, such as when it overvalues some data relative to other data. This source of bias can be both introduced and controlled by tuning the hyperparameters of the ML algorithms (see Section 3.2).
  • Sampling bias can occur when the training data is not fully representative of the data space to which ML is applied.

Inappropriate bias is often caused by sampling bias, but can occasionally be caused by algorithmic bias.
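A simple way to surface this kind of bias in testing is to compare the rate of favorable outcomes across groups (a demographic-parity style check). The data below is fabricated for illustration, and the choice of metric is an assumption; other fairness metrics exist.

```python
from collections import defaultdict

def positive_rates(decisions):
    """decisions: list of (group, favorable: bool) pairs.
    Returns the favorable-outcome rate per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

# Made-up lending decisions for two groups:
loans = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]
rates = positive_rates(loans)
print(rates)                               # {'A': 0.75, 'B': 0.25}
gap = max(rates.values()) - min(rates.values())
print(gap)                                 # 0.5 parity gap: worth investigating
```

A large gap does not prove inappropriate bias on its own, but it flags where sampling bias or algorithmic bias should be investigated.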


Ethics

Ethics is defined in the Cambridge Dictionary as:

A system of accepted beliefs that govern behavior, especially such a system based on morality

AI-based systems with advanced capabilities have a largely positive impact on people’s lives. As these systems have become more widespread, concerns have been raised about whether they are being used in an ethical manner.

What is considered ethical may change over time and may also vary from place to place and culture to culture.
Care must be taken to consider the different values of stakeholders when implementing an AI-based system from one place to another.

National and international guidelines on AI ethics exist in many countries and regions.
In 2019, the Organisation for Economic Co-operation and Development (OECD) published its Principles for AI, the first international standards agreed upon by governments for responsible AI development.
These principles were adopted by forty-two countries when they were published and are also endorsed by the European Commission. They contain practical policy recommendations as well as values-based principles for the “responsible use of trustworthy AI.” These can be summarized as follows:

  • AI should benefit people and the planet by promoting inclusive growth, sustainable development, and prosperity.
  • AI systems should respect the rule of law, human rights, democratic values, and diversity, and include appropriate safeguards to ensure a fair society.
  • AI should be transparent to ensure that people can understand and challenge the results.
  • AI systems must function robustly, safely, and reliably throughout their lifecycle, and risks should be continuously assessed.
  • Organizations and individuals that develop, deploy, or operate AI systems should be held accountable.

Side Effects and Reward Hacking

Side effects and reward hacking can cause AI-based systems to produce unexpected and even harmful results when the system attempts to achieve its goals.

Negative side effects can occur when the designer of an AI-based system specifies a goal that “focuses on completing some specific tasks in the environment, but ignores other aspects of the (potentially very large) environment, implicitly expressing indifference to environmental variables whose modification could actually be harmful.” For example, a self-driving car given the goal of reaching its destination “as fuel-efficiently and safely as possible” may achieve that goal, but with the side effect of an excessively long travel time that leaves passengers extremely angry.

Reward hacking can result in an AI-based system achieving a particular goal through a “clever” or “simple” solution that “perverts the spirit of the developer’s intent.” The goal can be “gamed,” so to speak. A common example of “reward hacking” is when an AI-based system teaches itself to play an arcade computer game. It is given the goal of achieving the “highest score,” and to do so, it simply hacks the data set that stores the highest score, rather than playing the game to achieve it.
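The self-driving example above can be sketched as a goal mis-specification problem: a reward that ignores travel time invites the side effect, while adding travel time back into the goal removes it. The reward weights and trip values below are purely illustrative.

```python
def reward_naive(trip):
    """Assumed goal: fuel efficiency and safety only (travel time ignored)."""
    return -trip["fuel_used"] - 10 * trip["safety_incidents"]

def reward_with_time(trip):
    """Same goal plus a penalty for travel time."""
    return reward_naive(trip) - 0.5 * trip["travel_minutes"]

slow_safe = {"fuel_used": 3, "safety_incidents": 0, "travel_minutes": 300}
normal    = {"fuel_used": 5, "safety_incidents": 0, "travel_minutes": 40}

# Under the naive goal, the excessively slow trip scores better:
print(reward_naive(slow_safe) > reward_naive(normal))          # True
# Once travel time is part of the goal, the normal trip wins:
print(reward_with_time(normal) > reward_with_time(slow_safe))  # True
```

The same reasoning applies to reward hacking: if the goal is stated as “maximize the stored high score,” hacking the score table satisfies the letter of the goal, so the goal must be stated in terms of the behavior actually intended.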

Transparency, Interpretability, and Explainability

AI-based systems are typically deployed in domains where users need to trust them. This may be for security reasons, where privacy protection is required, or where the systems make potentially life-changing predictions and decisions.

Most users are presented with AI-based systems as “black boxes” and have little knowledge of how these systems arrive at their results. In some cases, this ignorance even applies to the data scientists who developed the systems. Occasionally, users are not even aware that they are interacting with an AI-based system.

The inherent complexity of AI-based systems has led to the field of “explainable AI” (XAI). The goal of XAI is to enable users to understand how AI-based systems arrive at their results, thereby increasing user confidence in them.

According to the Royal Society, there are several reasons to support XAI, including:

  • Increasing user confidence in the system
  • Protecting against bias
  • Meeting regulatory standards or policy requirements
  • Improving system design
  • Assessing risk, robustness, and vulnerability
  • Understanding and verifying the results of a system
  • Autonomy, agency (so that the user feels empowered), and fulfillment of social values

This results in the following three basic desirable XAI characteristics for AI-based systems from a stakeholder perspective (see also Section 8.6):

  • Transparency: the ease with which the algorithm and the training data used to produce the model can be determined.
  • Interpretability: the understandability of the AI technology to the various stakeholders, including the users.
  • Explainability: the ease with which users can determine how the AI-based system arrives at a particular result.
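For a simple model, explainability can be as direct as reporting each input's contribution to the result. The sketch below uses an invented linear scoring model (the features, weights, and applicant values are all assumptions) to show the idea; real XAI techniques for opaque models are far more involved.

```python
# Hypothetical linear lending-score model:
WEIGHTS = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}

def score(applicant):
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Per-feature contributions to the score, largest magnitude first."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 5.0, "debt": 2.0, "years_employed": 4.0}
print(round(score(applicant), 2))  # 2.6
print(explain(applicant))
# [('income', 3.0), ('debt', -1.6), ('years_employed', 1.2)]
```

An explanation of this kind lets a user see which factors drove a particular result, which is exactly what the explainability characteristic asks for.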

Safety and AI

In this curriculum, safety is the expectation that an AI-based system will not cause harm to people, property, or the environment. Such systems can be used to make decisions that affect safety, for example in medical, manufacturing, defense, security, and transportation applications.

Features of AI-based systems that make it difficult to ensure their safety (e.g., not harming people) include:

  • Complexity
  • Non-determinism
  • Probabilistic nature
  • Self-learning
  • Lack of transparency, interpretability and explainability
  • Lack of robustness

The challenges of testing several of these features are discussed in Chapter 8.
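Because of the probabilistic and non-deterministic features listed above, a single pass/fail run is often insufficient evidence of safe behavior. One common approach, sketched here with a simulated component (the detector, run count, and thresholds are illustrative assumptions), is statistical: execute the component many times and require its success rate to clear a threshold.

```python
import random

def flaky_detector(rng):
    """Stand-in for a probabilistic component that succeeds ~95% of the time."""
    return rng.random() < 0.95

def statistical_pass(component, runs=1000, required_rate=0.90, seed=42):
    """Run the component repeatedly and check its overall success rate."""
    rng = random.Random(seed)
    successes = sum(component(rng) for _ in range(runs))
    return successes / runs >= required_rate

print(statistical_pass(flaky_detector))  # True for this seed and threshold
```

Fixing the random seed keeps the test repeatable, while the required rate expresses the safety requirement in statistical rather than absolute terms.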

Source: ISTQB®: Certified Tester AI Testing (CT-AI) Syllabus Version 1.0
