The test automation team must verify that the automated test environment is working as expected.
These checks are performed, for example, before the automated tests are started. There are a number of steps that can be taken to verify the components of the automated test environment.
Each of these steps is explained in more detail below:
Installation, setup, configuration, and customization of the test tool
The TAS is made up of many components. Each of these components must be considered to ensure reliable and repeatable performance. At the core of a TAS are the executable components, the corresponding function libraries, and the supporting data and configuration files. The process of configuring a TAS can range from using automated installation scripts to manually placing files in the appropriate folders.
As with operating systems and other applications, test tools frequently have service packs, or optional or required add-ins, to ensure compatibility with a particular SUT environment.
Automated installation (or copy) from a central repository has advantages. Tests can be guaranteed to have been run on different SUTs with the same version of the TAS and the same configuration of the TAS, where appropriate. Upgrades to the TAS can be performed through the repository. The use of the repository and the procedure for upgrading to a new version of the TAS should be the same as for standard development tools.
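The repository-based approach above can be sketched as a simple integrity check against a release manifest. In this illustrative Python sketch, `verify_installation`, the manifest format, and the file names are all invented for the example; a real TAS would rely on its own packaging and versioning mechanism.

```python
import hashlib
import tempfile
from pathlib import Path

def verify_installation(install_dir, manifest):
    """Return a list of files that are missing or differ from the manifest.

    The manifest maps relative file paths to expected SHA-256 digests for one
    approved TAS release; this format is illustrative, not a real standard.
    """
    problems = []
    for rel_path, expected_digest in manifest["files"].items():
        target = Path(install_dir) / rel_path
        if not target.exists():
            problems.append(f"missing: {rel_path}")
        elif hashlib.sha256(target.read_bytes()).hexdigest() != expected_digest:
            problems.append(f"modified: {rel_path}")
    return problems

# Minimal demonstration with a throwaway directory standing in for an install.
demo_dir = Path(tempfile.mkdtemp())
(demo_dir / "engine.py").write_text("# TAS core component\n")
manifest = {"files": {
    "engine.py": hashlib.sha256((demo_dir / "engine.py").read_bytes()).hexdigest(),
    "settings.cfg": "0" * 64,   # deliberately absent from the install
}}
print(verify_installation(demo_dir, manifest))   # -> ['missing: settings.cfg']
```

Running the same check on every SUT environment gives evidence that each one received the same TAS version and configuration from the repository.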
Test scripts with known passed and failed tests
When known passing test cases fail, it is immediately clear that something is fundamentally wrong and should be fixed as soon as possible. Conversely, when test cases that should fail instead pass, the component that did not work correctly must be identified. It is important to verify that log files and performance metrics are created correctly, and that the test case/script is set up and torn down automatically. It is also helpful to run some tests of the different test types and levels (functional tests, performance tests, component tests, etc.). This should also be done at the framework level.
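As a minimal sketch of such an environment check, the snippet below runs one test that must pass and one that must fail, then confirms that the framework records exactly those verdicts. `run_test` and the two check functions are hypothetical stand-ins for real TAS hooks.

```python
def run_test(name, check):
    """Execute a check function and return a (name, verdict) record."""
    try:
        check()
        return (name, "PASS")
    except AssertionError:
        return (name, "FAIL")

def known_pass():
    assert 1 + 1 == 2          # must always hold

def known_fail():
    assert 1 + 1 == 3          # must always be rejected

results = dict([run_test("known_pass", known_pass),
                run_test("known_fail", known_fail)])

# The environment check succeeds only if both verdicts are as expected.
environment_ok = (results["known_pass"] == "PASS"
                  and results["known_fail"] == "FAIL")
```

If `environment_ok` is false, the automated test run should not be started until the faulty component has been identified.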
Repeatability in setting up and tearing down the test environment
A TAS is implemented on a variety of systems and servers. To ensure that the TAS functions properly in each environment, a systematic approach to loading and unloading the TAS from a particular environment is required. This is successful when the setup and teardown of the TAS show no discernible differences in operation within and between environments. Configuration management of the TAS components ensures that a particular configuration can be recreated reliably.
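One way to make setup and teardown repeatable is to pair them so that teardown always runs, even when a run aborts. The sketch below assumes hypothetical `deploy_tas`/`remove_tas` provisioning steps; a real TAS would replace them with its actual installation logic.

```python
import contextlib

# Sketch only: deploy_tas/remove_tas are hypothetical stand-ins for the real
# provisioning steps (copying binaries, registering services, loading data).
deployed = []

def deploy_tas(environment):
    deployed.append(environment)

def remove_tas(environment):
    deployed.remove(environment)

@contextlib.contextmanager
def tas_environment(environment):
    """Set up the TAS and guarantee teardown, even when a test run fails."""
    deploy_tas(environment)
    try:
        yield environment
    finally:
        remove_tas(environment)

with tas_environment("staging"):
    assert deployed == ["staging"]      # TAS present only inside the block

try:
    with tas_environment("qa"):
        raise RuntimeError("simulated test run failure")
except RuntimeError:
    pass                                # teardown ran despite the failure
```

Because teardown is coupled to setup, every environment returns to its baseline state after each run, which is what makes comparisons between environments meaningful.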
Connectivity with internal and external systems/interfaces
After a TAS is installed in a particular SUT environment and before it is actually deployed in a SUT, a series of tests or preconditions should be performed to ensure connectivity to internal and external systems, interfaces, and so forth. Establishing preconditions for automation is important to ensure that the TAS has been installed and configured correctly.
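Such precondition checks can be as simple as probing each required endpoint before the test run starts. In this sketch the helper names are illustrative, and the idea is demonstrated against a local listener rather than a real interface.

```python
import socket

def check_endpoint(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def verify_preconditions(endpoints):
    """Map each (host, port) endpoint to its reachability before a test run."""
    return {f"{host}:{port}": check_endpoint(host, port)
            for host, port in endpoints}

# Demonstrate against a local listener standing in for a real interface.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]
status = verify_preconditions([("127.0.0.1", open_port)])
listener.close()
```

A TAS would typically refuse to start the actual test run until every entry in such a status map reports the interface as reachable.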
Intrusivity of the automated test tools
The TAS will often be tightly coupled with the SUT. This is intentional to ensure a high degree of compatibility, especially with regard to interaction at the GUI level. However, this tight integration can also have negative effects. These include: the SUT behaving differently when the TAS is present in its environment; the SUT behaving differently under automation than under manual use; and SUT performance being affected by the TAS in the environment or when the TAS is running against the SUT.
The level of intrusion/intrusiveness differs depending on the automated testing approach selected. For example:
- If the SUT is accessed through external interfaces, the degree of intrusion is very low. External interfaces can be electronic signals (for physical switches) or USB signals for USB devices (such as keyboards). This approach simulates the end user in the best way. In this approach, the software of the SUT is not changed at all for testing purposes. The behavior and timing of the SUT are not affected by the testing approach. Interfacing with the SUT in this way can be very complex: dedicated hardware might be required, hardware description languages may be needed to interact with the SUT, and so on. This is not a typical approach for software-only systems, but it is more common for products with embedded software.
- When coupling with the SUT at the GUI level, the SUT environment is adapted to inject UI commands and extract information required by the test cases. The behavior of the SUT is not directly changed, but the timing is affected, which can lead to behavioral degradation. The level of intrusion is higher than in the previous point, but the connection with the SUT is less complex in this way. Commercial off-the-shelf tools can often be used for this type of automation.
- Interfacing with the SUT can be done through test interfaces in the software or by using existing interfaces already provided by the software. The availability of these interfaces (APIs) is an important part of design for testability. The level of intrusion in this case can be quite high.
Automated tests may use interfaces that are never exercised by end users of the system (test interfaces), or may use existing interfaces in a different context than in the real world. On the other hand, running automated tests via interfaces (APIs) is very easy and inexpensive. Testing the SUT via test interfaces can be a solid approach as long as the potential risk is understood.
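To illustrate the contrast with GUI-level automation, the sketch below drives an invented SUT fragment (`Account`) directly through its API, with no UI interaction at all; the class and its methods are hypothetical examples, not part of any real SUT.

```python
class Account:
    """Minimal stand-in for a SUT component exposing an API-level interface."""

    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount
        return self.balance

# API-level test: direct calls only -- no GUI driving, no screen scraping.
def test_deposit_via_api():
    account = Account(balance=100)
    assert account.deposit(50) == 150          # normal behavior
    try:
        account.deposit(-1)
        raise AssertionError("negative deposit was accepted")
    except ValueError:
        pass                                   # invalid input rejected
    return True
```

Such tests are fast and cheap to run, but they bypass the user-facing layers, which is exactly the risk the text above describes.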
A high degree of intrusion into the system can lead to failures during testing that do not occur under real-world deployment conditions. If this leads to false failures in the automated tests, confidence in the test automation solution can drop dramatically. Developers may require that defects identified by automated testing first be reproduced manually to support analysis.
Testing framework components
As with any software development project, automated framework components must be tested and reviewed individually. This may include functional and non-functional testing (performance, resource utilization, usability, etc.).
For example, components that provide object verification on GUI systems must be tested for a variety of object classes to determine if object verification is working correctly. Similarly, error logs and reports should provide accurate information about the status of automation and the behavior of the SUT.
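For example, a hypothetical object-verification component could be exercised against several object classes like this; the widget classes and `verify_object` are invented for illustration only.

```python
class Button:
    """Illustrative GUI object class."""
    def __init__(self, label):
        self.label = label
    def state(self):
        return {"type": "Button", "text": self.label}

class Checkbox:
    """A second object class to exercise the same verification component."""
    def __init__(self, checked):
        self.checked = checked
    def state(self):
        return {"type": "Checkbox", "checked": self.checked}

def verify_object(widget, expected):
    """Compare a widget's reported state against the expected properties."""
    actual = widget.state()
    return all(actual.get(key) == value for key, value in expected.items())

# The same verification component is tested across different object classes,
# including a case that it must correctly reject.
checks = [
    verify_object(Button("OK"), {"type": "Button", "text": "OK"}),
    verify_object(Checkbox(True), {"type": "Checkbox", "checked": True}),
    verify_object(Button("OK"), {"text": "Cancel"}),   # must be rejected
]
```

Testing the verification component with both matching and mismatching properties confirms that it reports failures as reliably as successes.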
Examples of non-functional testing include understanding framework performance degradation and monitoring system resource utilization, which may indicate issues such as memory leaks. Interoperability of components within and/or outside the framework should also be tested.
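A simple way to look for unbounded memory growth is to repeat a framework operation and compare peak memory usage across run lengths. In this sketch, `framework_step` is an invented placeholder for a real framework component; the measurement uses Python's standard `tracemalloc` module.

```python
import tracemalloc

def framework_step():
    """Stand-in for one iteration of a framework component under test."""
    data = [n * n for n in range(1000)]   # allocated and released each call
    return sum(data)

def peak_memory_over(iterations):
    """Return peak allocated bytes while running the step repeatedly."""
    tracemalloc.start()
    for _ in range(iterations):
        framework_step()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

# A leaking component's peak would keep climbing with more iterations;
# a healthy one shows roughly constant peak usage regardless of run length.
short_run = peak_memory_over(10)
long_run = peak_memory_over(100)
```

If the peak grows roughly linearly with the iteration count, the component under test is retaining memory it should have released.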