Introduction to Artificial Intelligence (AI)

Definition of AI and AI Effect

The term artificial intelligence (AI) dates back to the 1950s and refers to the goal of building and programming “intelligent” machines capable of mimicking human beings. The definition has since evolved considerably, and the following summarizes the current concept:

The ability of a technical system to acquire, process, and apply knowledge and skills.

What people understand by AI depends on their current frame of reference. In the 1970s, a computer system that could beat a human at chess was still in the future, and most people would have considered it AI. Today, more than twenty years after the Deep Blue computer-based system defeated world chess champion Garry Kasparov, many no longer consider the “brute force” approach implemented in that system to be true artificial intelligence (i.e., the system did not learn from data and was not capable of learning on its own). Similarly, the expert systems of the 1970s and 1980s captured human expertise in the form of rules that could be executed repeatedly without the expert being present. These were considered AI at the time, but are no longer considered such today.

The changing perception of what constitutes AI is referred to as the “AI effect”. As society’s perception of AI changes, so does its definition. As a result, any definition made today is likely to change in the future and no longer match definitions of the past.

Narrow, General, and Super AI

At a high level, AI can be divided into three categories:

  • Narrow AI systems (also referred to as weak AI) are programmed to perform a specific task with limited context. This form of AI is currently widely used. Examples include gaming systems, spam filters, test case generators, and voice assistants.
  • General AI systems (also referred to as strong AI) have wide-ranging cognitive capabilities similar to those of humans. Such AI-based systems can reason like humans, understand their environment, and act accordingly. As of 2021, no general AI systems had been realized.
  • Super AI systems are capable of replicating human cognition (general AI) while also having tremendous processing power, virtually unlimited memory, and access to all human knowledge (e.g., through access to the Internet). Super AI systems are expected to rapidly become smarter than humans. The point at which AI-based systems transition from general AI to super AI is commonly referred to as the technological singularity.

AI-based and conventional systems

In a typical conventional computer system, the software is programmed by humans in an imperative language that includes constructs such as if-then-else and loops. It is relatively easy for humans to understand how the system converts inputs into outputs. In an AI-based system that uses machine learning (ML), patterns in data are used by the system to determine how it should respond to new data in the future (see Chapter 3 for a detailed explanation of ML). For example, an AI-based image processor designed to recognize images of cats is trained with a set of images known to contain cats. The AI independently determines which patterns or features in the data can be used to identify cats.
These patterns and rules are then applied to new images to determine whether they contain cats. For many AI-based systems, this makes the prediction procedure harder for humans to understand (see Section 2.7). In practice, AI-based systems can be implemented with a variety of technologies, and the “AI effect” determines what is currently considered an AI-based system and what is considered a conventional system.
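
To make this contrast concrete, the following minimal Python sketch compares a conventional, hand-coded spam check with an ML-based one. The scikit-learn library, the toy subject lines, and their labels are all assumptions chosen purely for illustration:

    # Conventional system: a human programs explicit if-then rules,
    # so the mapping from input to output is easy to trace.
    def is_spam_conventional(subject: str) -> bool:
        return "win a prize" in subject.lower()

    # AI-based system: the mapping from input to output is learned from data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    subjects = ["win a prize now", "meeting at 10am", "free money", "project update"]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy labels)

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(subjects, labels)               # patterns are extracted from the data
    print(model.predict(["win free money"]))  # learned patterns applied to new data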

AI Technologies

AI can be implemented using a variety of technologies, e.g.:

  • Fuzzy logic
  • Search algorithms
  • Inference techniques
    • Rule mechanisms
    • Deductive classifiers
    • Case-based reasoning
    • Procedural reasoning
  • Machine learning techniques
    • Neural networks
    • Bayesian models
    • Decision trees
    • Random forests
    • Linear regression
    • Logistic regression
    • Clustering algorithms
    • Genetic algorithms
    • Support vector machines (SVM)

AI-based systems typically employ one or more of these technologies.
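
As a minimal sketch of one of these techniques, the following Python example trains a decision tree with scikit-learn on its bundled Iris dataset (the library and dataset are assumptions chosen for illustration) and prints the rules the model has learned:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)                        # small, well-known sample dataset
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X, y)                                            # learn decision rules from the data

    print(export_text(clf))    # the learned if-then rules, in human-readable form
    print(clf.predict(X[:1]))  # apply the learned rules to data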

AI Development Frameworks

There are many AI development frameworks, some of which specialize in specific areas. These frameworks support a range of activities, such as data preparation, algorithm selection, and compilation of models for execution on various processors, such as central processing units (CPUs), graphics processing units (GPUs), or cloud tensor processing units (TPUs). The choice of framework may also depend on aspects such as the programming language used for implementation and the framework’s ease of use. The following frameworks are among the most popular (as of April 2021):

  • Apache MXNet: An open source deep learning framework used by Amazon for Amazon Web Services (AWS).
  • CNTK: The Microsoft Cognitive Toolkit (CNTK) is an open source deep learning toolkit.
  • IBM Watson Studio: A set of tools that support the development of AI solutions.
  • Keras: A high-level open source deep learning API written in Python that can run on top of TensorFlow and CNTK.
  • PyTorch: An open source ML library primarily developed by Facebook, used for image processing and natural language processing (NLP) applications. Both Python and C++ interfaces are supported.
  • Scikit-learn: An open source ML library for the Python programming language.
  • TensorFlow: An open source ML framework based on data flow graphs for scalable machine learning, provided by Google.

Note that these development frameworks are constantly evolving, sometimes combined, and sometimes replaced by new frameworks.
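
As a rough illustration of what working with such a framework looks like, the following sketch defines and compiles a tiny neural network using the Keras API bundled with TensorFlow; the layer sizes and training call are arbitrary choices for illustration:

    import tensorflow as tf

    # Define a small feed-forward network with the Keras Sequential API.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),  # hidden layer
        tf.keras.layers.Dense(3, activation="softmax"),                  # 3 output classes
    ])

    # The framework handles optimization, loss computation, and metrics.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()  # prints the model architecture
    # model.fit(train_x, train_y, epochs=10)  # training would follow, given data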

Hardware for AI-based systems

A variety of hardware is used for ML model training (see Chapter 3) and model implementation. For example, a model for speech recognition can run on a simple smartphone, even though access to cloud computing power may be required to train the model. A common approach when the host device is not connected to the Internet is to train the model in the cloud and then deploy it on the host device.

ML typically benefits from hardware that supports the following features:

  • Low-precision arithmetic: using fewer bits for computation (e.g., 8 bits instead of 32), which is usually sufficient for ML (see the sketch after this list).
  • The ability to work with large data structures (e.g., to support matrix multiplication).
  • Massively parallel (concurrent) processing.
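
The following minimal NumPy sketch (an assumption chosen for illustration) shows the low-precision idea: the same matrix multiplication is computed with 32-bit and 16-bit floating-point numbers, and the resulting relative error is typically small compared to what ML applications can tolerate:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.random((256, 256), dtype=np.float32)
    b = rng.random((256, 256), dtype=np.float32)

    full = a @ b  # 32-bit matrix multiplication
    low = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)  # 16-bit

    # Mean relative error introduced by halving the precision
    print(np.mean(np.abs(full - low) / np.abs(full)))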

General-purpose CPUs provide support for complex operations that are not typically required by ML applications, and they have only a few cores. Their architecture is therefore less efficient for training and executing ML models than that of GPUs, which have thousands of cores and are designed for massively parallel but relatively simple processing of images. As a result, GPUs are generally more effective than CPUs for ML applications, even though CPUs typically have higher clock speeds. For small-scale ML work, GPUs generally provide the best option.

There is also hardware built specifically for AI, such as purpose-built application-specific integrated circuits (ASICs) and systems on a chip (SoC). These AI-specific solutions offer features such as multiple cores, dedicated data management, and in-memory processing. They are best suited to edge computing, with the ML model itself being trained in the cloud.

Hardware with new AI-specific architectures is currently under development. This includes neuromorphic processors, which do not use the traditional von Neumann architecture but instead an architecture that loosely mimics the neurons of the brain.

Examples of AI hardware vendors and their processors include:

  • NVIDIA: The company offers a range of graphics processors and AI-specific processors, such as the Volta.
  • Google: The company has developed application-specific integrated circuits for both training and inference. Google Cloud TPUs (Tensor Processing Units) can be used in the Google Cloud, while the Edge TPU is a purpose-built ASIC designed to run AI on individual devices.
  • Intel: The company provides Nervana neural network processors for deep learning (both training and inference) and Movidius Myriad vision processing units for inference in computer vision and neural network applications.
  • Mobileye: The company makes the EyeQ family of SoC devices that support complex and computationally intensive image processing. These have low power consumption for use in vehicles.
  • Apple: The company makes the Bionic chip for on-device AI in iPhones.
  • Huawei: The company’s Kirin 970 smartphone chip includes integrated neural network processing for AI.

AI as a Service (AIaaS)

AI components, such as ML models, can be created within an organization, downloaded from a third-party vendor, or used as a service on the Internet (AIaaS). A hybrid approach is also possible, where part of the AI functionality is provided within the system and part is provided as a service.

When using ML as a service, access to an ML model is provided over the Internet, and support for data preparation and storage, model training, evaluation, tuning, testing, and deployment can also be provided.

Third-party providers (e.g., AWS, Microsoft) offer specialized AI services, such as facial and speech recognition. This enables individuals and enterprises to deploy AI through cloud-based services even if they lack the resources and expertise to develop their own AI services. In addition, ML models deployed as part of a third-party service have likely been trained on a larger and more diverse training dataset than is available to many organizations, such as those that have entered the AI market more recently.
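
As a rough illustration of how such a service is consumed, the following sketch calls the AWS Rekognition facial-analysis service through the boto3 Python SDK. It assumes AWS credentials are already configured, and the file name photo.jpg is a placeholder:

    import boto3  # AWS SDK for Python

    # Call a cloud-hosted facial-analysis service; the model stays with the provider.
    client = boto3.client("rekognition", region_name="us-east-1")

    with open("photo.jpg", "rb") as f:  # "photo.jpg" is a placeholder image file
        response = client.detect_faces(Image={"Bytes": f.read()}, Attributes=["ALL"])

    for face in response["FaceDetails"]:
        print(face["Confidence"])  # the service's confidence that a face was detected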

Contracts for AI as a Service

These AI services are typically provided under contracts similar to those for non-AI cloud-based Software as a Service (SaaS). A contract for AIaaS typically includes a service-level agreement (SLA) that specifies availability and security commitments. Such SLAs typically define an uptime for the service (e.g., 99.99%, which corresponds to roughly 52 minutes of downtime per year) and a response time for remediation, but rarely define ML functional performance metrics (such as accuracy) in a similar manner (see Chapter 5). AIaaS is often paid for on a subscription basis, and if the contracted availability and/or response times are not met, the service provider typically offers credits for future services. Beyond these credits, most AIaaS contracts limit the provider’s liability to the fees paid, which means that AI-based systems relying on AIaaS are usually limited to relatively low-risk applications, where a service failure would not cause too much damage.

The services are often offered with an initial free trial period instead of an acceptance period. During this period, the user of the AIaaS is expected to test whether the service meets their requirements in terms of functionality and functional performance (e.g., accuracy). Such testing is usually necessary because of the limited transparency of the provided service (see Section 7.5).

AIaaS Examples

The following are examples of AIaaS:

  • IBM Watson Assistant: This is an AI chatbot priced based on the number of monthly active users.
  • Google Cloud AI and ML products: These offer document-based AI that includes a form parser and document OCR. Pricing is based on the number of pages being processed.
  • Amazon CodeGuru: This provides ML-based reviews of Java code, giving developers recommendations for improving code quality. Pricing is based on the number of lines of source code analyzed.
  • Microsoft Azure Cognitive Search: Provides an AI cloud search. Pricing is based on search units (defined by storage and throughput used).

Pre-built Models

Introduction to pre-trained models

Training ML models can be expensive (see Chapter 3). First, the data must be prepared, and then the model must be trained. The first activity can consume large amounts of human resources, while the second can consume significant computing resources. Many organizations do not have access to these resources.

A less expensive and often more effective alternative is to use a pre-trained model. Such a model provides functionality similar to that of the required model and is used as the basis for creating a new model that extends and/or focuses the pre-trained model’s functionality. Pre-trained models are only available for a limited number of technologies, such as neural networks and random forests.

If an image classifier is needed, it could be trained using the publicly available ImageNet dataset, which contains over 14 million images classified into over 1,000 categories. However, this carries the risk of consuming significant resources with no guarantee of success. Alternatively, an existing model that has already been trained on this dataset could be reused. Using such a pre-trained model saves costs and largely eliminates the risk that the new model will not work.

If a pre-trained model is used without modification, it can be easily embedded in the AI-based system or used as a service.
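
For example, the following sketch loads a publicly available ImageNet-trained classifier via Keras and uses it unchanged; the file name cat.jpg is a placeholder, and the utility functions assume a recent TensorFlow version:

    import tensorflow as tf
    from tensorflow.keras.applications.mobilenet_v2 import (
        MobileNetV2, preprocess_input, decode_predictions)

    # Load a model pre-trained on ImageNet and use it without modification.
    model = MobileNetV2(weights="imagenet")

    img = tf.keras.utils.load_img("cat.jpg", target_size=(224, 224))  # placeholder image
    x = preprocess_input(tf.keras.utils.img_to_array(img)[tf.newaxis, ...])
    print(decode_predictions(model.predict(x), top=3))  # top-3 ImageNet classes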

Transfer Learning

It is also possible to modify an already trained model to meet a second, different requirement. This is known as transfer learning and is used with deep neural networks, in which the early layers (see Chapter 6) typically perform fairly simple tasks (e.g., distinguishing between straight and curved lines in an image classifier), while the later layers perform more specialized tasks (e.g., distinguishing between building types). In this example, all but the later layers of the image classifier can be reused, so the early layers do not need to be retrained. The later layers are then retrained to meet the specific requirements of the new classifier. In practice, the pre-trained model can be fine-tuned by additional training on new, problem-specific data.
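
A minimal sketch of this approach, assuming TensorFlow/Keras and an invented five-class target problem: the pre-trained early layers are frozen, and only a new, problem-specific head is trained:

    import tensorflow as tf

    # Reuse the early layers of an ImageNet-trained network.
    base = tf.keras.applications.MobileNetV2(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # freeze the pre-trained early layers

    # Attach new later layers for the new classification task.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation="softmax"),  # 5 classes are an assumption
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(new_images, new_labels, epochs=5)  # fine-tune on problem-specific data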

The effectiveness of this approach depends largely on the similarity between the function performed by the original model and the function required of the new model. For example, modifying an image classifier that identifies cat species to identify other animal species is likely to be far more effective than modifying it to recognize human accents.

Many pre-trained models are available, often published by academic researchers. Examples include ImageNet models such as Inception, VGG, AlexNet, and MobileNet for image classification, and pre-trained NLP models such as Google’s BERT.

Risks associated with the use of pre-trained models and transfer learning

The use of pre-trained models and transfer learning are both common approaches to building AI-based systems, but they come with some risks. These include:

  • A pre-trained model may lack transparency compared to an internally created model.
  • The degree of similarity between the function performed by the pre-trained model and the required functionality may be insufficient. Furthermore, this difference may not be understood by the data scientist.
  • Differences in the data preparation steps (see Section 4.1) used for the pre-trained model during initial development and the data preparation steps used when that model is then deployed in a new system may affect the resulting functional performance.
  • The shortcomings of a pre-trained model are likely to be inherited by those who reuse it and may not be documented. For example, inherited biases (see Section 2.4) may not be apparent if there is a lack of documentation of the data used to train the model. If the trained model is not widely used, there are likely to be more unknown (or undocumented) errors, and more rigorous testing may be required to mitigate this risk.
  • Models created through transfer learning are likely to have the same vulnerabilities as the pre-trained model on which they are based (e.g., to adversarial attacks, as discussed in Section 9.1.1). Furthermore, if an AI-based system is known to contain (or be based on) a particular pre-trained model, the associated vulnerabilities may already be known to potential attackers.

It should be noted that several of the above risks can be more easily mitigated if thorough documentation is available for the pre-trained model (see Section 7.5).

Standards, Regulations and AI

The ISO/IEC Joint Technical Committee on Information Technology (ISO/IEC JTC 1) is developing international standards relating to AI. For example, a subcommittee on AI (ISO/IEC JTC 1/SC 42) was established in 2017. In addition, ISO/IEC JTC 1/SC 7, which deals with software and systems engineering, has published a technical report on “Testing AI-based systems.”

Standards on AI are also published at the regional level (e.g., European standards) and at the national level.

The EU-wide General Data Protection Regulation (GDPR) came into force in May 2018 and sets obligations for data controllers in relation to personal data and automated decision making. It includes requirements to assess and improve the functional performance of AI systems, including mitigating potential discrimination, and to ensure the rights of individuals not to be subject to automated decision making. The most important aspect of the GDPR from a testing perspective is that personal data (including predictions) should be accurate. This does not mean that every single prediction made by the system has to be correct, but that the system should be accurate enough for the purposes for which it is used.

The German Institute for Standardization (DIN) has also developed the AI quality metamodel.

Standards on AI are also published by industry associations. For example, the Institute of Electrical and Electronics Engineers (IEEE) is working on a set of standards on ethics and AI (The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems). Many of these standards are still under development at the time of writing.

When AI is used in safety-related systems, the relevant safety standards apply, e.g., ISO 26262 and ISO/PAS 21448 (SOTIF) for automotive systems. Compliance with such standards is usually mandated by government agencies, and in some countries it would be illegal to sell a car whose software did not comply with ISO 26262. Standards in themselves are voluntary documents, and their use is only mandatory when required by law or contract. However, many organizations apply standards voluntarily to benefit from the expertise of their authors and to create higher-quality products.

Source: ISTQB®: Certified Tester AI Testing (CT-AI) Syllabus Version 1.0
