In addition to the ongoing maintenance tasks required to keep the TAS in sync with the SUT, there are typically many ways to improve the TAS. TAS improvements can provide a number of benefits, such as increased efficiency (further reducing manual intervention), improved usability, additional functionality, and improved support for testing activities. Deciding how to improve the TAS depends on which benefits will provide the most value to the project.
Specific areas of a TAS that are candidates for improvement include scripting, verification, architecture, pre- and post-processing, documentation, and tool support. These are described in more detail below.
Scripting
Scripting approaches range from simple structured approaches through data-driven approaches to the more sophisticated keyword-driven approaches, as described in the article Test Automation Architecture Design. It may be useful to update the current TAS scripting approach for all new automated tests. The approach can also be upgraded for all existing automated tests, or at least for those that require the most maintenance.
Rather than completely changing the scripting approach, TAS improvements can focus on script implementation. For example:
- Evaluate the overlap of test cases/steps/procedures to consolidate automated tests.
Test cases that contain similar action sequences should not execute these steps multiple times. These steps should be converted to a function and added to a library so they can be reused. These library functions can then be used by different test cases. This increases the maintainability of the testware. If the test steps are not identical but similar, parameterization may be required.
Note: This is a typical approach for keyword-driven testing.
- Establish a troubleshooting process for the TAS and the SUT.
If an error occurs during test case execution, the TAS should be able to resolve the error condition so that it can proceed to the next test case. If an error occurs in the SUT, the TAS must be able to perform the necessary recovery actions on the SUT (e.g., reboot the entire SUT).
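A minimal sketch of such a recovery-aware runner: a failing test case must not stop the run, and an error triggers a recovery action (here a hypothetical reboot callback, invented for illustration) before the next test case starts:

```python
def run_suite(test_cases, recover_sut):
    """Run (name, test) pairs; on failure, record the error,
    recover the SUT, and continue with the next test case."""
    results = {}
    for name, test in test_cases:
        try:
            test()
            results[name] = "pass"
        except Exception as exc:
            results[name] = f"fail: {exc}"
            recover_sut()  # e.g., reboot the SUT to a known state
    return results

# Usage: one broken test case does not block the rest of the suite.
recoveries = []
def reboot():
    recoveries.append("reboot")

def ok():
    pass

def broken():
    raise RuntimeError("SUT hung")

results = run_suite([("t1", ok), ("t2", broken), ("t3", ok)], reboot)
```

After the run, `results` records one failure for "t2", "t3" has still executed, and exactly one recovery action has been performed.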
- Evaluate the wait mechanisms to ensure that the best type is used.
There are three common wait mechanisms:
- Hard-coded waiting (waiting a certain number of milliseconds) can be the cause of many test automation problems.
- Dynamic waiting by polling (e.g., checking whether a certain state change or action has occurred) is much more flexible and efficient:
- The test waits only as long as required, so no test time is wasted.
- If the process takes longer for some reason, polling simply waits until the condition is met. Remember to include a timeout mechanism; otherwise the test can wait forever in case of a problem.
- An even better way is to subscribe to the event mechanism of the SUT. This is much more reliable than the other two options, but the test scripting language must support subscribing to events and the SUT must expose these events to the test application. As with polling, remember to include a timeout mechanism; otherwise the test may wait forever in case of a problem.
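The polling variant with a timeout can be sketched as follows; the helper name `wait_until` is an assumption for illustration, not a standard API:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True, instead of a
    hard-coded sleep. Raise TimeoutError if it never does, so a
    problem in the SUT cannot make the test wait forever."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")
```

The test returns as soon as the condition holds (no wasted wait time), and the timeout bounds the worst case.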
- Treat the test software like software.
Developing and maintaining test software is a form of software development. As such, good coding practices (e.g., use of coding guidelines, static analysis, code reviews) should be used. It may even be a good idea to assign software developers (rather than test engineers) to develop certain parts of the test software (e.g., libraries).
- Evaluate existing scripts for revision/elimination.
Some scripts may be problematic (e.g., they fail from time to time or incur high maintenance costs), and it may make sense to redesign these scripts. Other test scripts can be removed from the suite because they no longer add value.
It should not come as a surprise when an automated regression test suite grows to the point where it no longer completes overnight.
If testing takes too long, it may be necessary to test simultaneously on different systems, but this is not always possible. If expensive systems (targets) are used for testing, it may be a constraint that all tests must run on a single target. In that case it may be necessary to split the regression test suite into several parts, each of which is executed during a specific time period (e.g., a single night). Further analysis of automated test coverage may reveal duplicates; eliminating them can reduce execution time and provide further efficiency gains.
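One simple way to split a suite into time-boxed parts is a greedy partition that always assigns the next-longest test to the currently shortest part. This is only a sketch; the test names and durations below are invented:

```python
def split_suite(tests, parts):
    """tests: list of (name, duration_minutes).
    Returns `parts` buckets, each with a running total and a
    list of test names, balanced greedily by duration."""
    buckets = [{"total": 0.0, "tests": []} for _ in range(parts)]
    # Longest tests first, each into the currently lightest bucket.
    for name, duration in sorted(tests, key=lambda t: -t[1]):
        target = min(buckets, key=lambda b: b["total"])
        target["tests"].append(name)
        target["total"] += duration
    return buckets

# Usage: split four tests across two nightly runs.
nightly = split_suite(
    [("smoke", 30), ("api", 20), ("ui", 10), ("regression", 40)],
    parts=2,
)
```

Each bucket can then be scheduled in its own time slot (e.g., one per night) on the single target.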
Verification
Before creating new verification capabilities, implement a set of standard verification methods for all automated tests. This avoids re-implementing verification methods for multiple tests. If the verification methods are not identical but similar, parameterization helps so that one function can be used for multiple object types.
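A sketch of such a standard, parameterized verification function; the object types and normalization rules here are illustrative assumptions:

```python
def verify_value(obj_type, actual, expected):
    """One shared verification entry point, parameterized by
    object type, instead of per-test re-implementations."""
    normalizers = {
        "text": lambda v: str(v).strip(),
        "number": lambda v: float(v),
        "flag": lambda v: bool(v),
    }
    normalize = normalizers.get(obj_type, lambda v: v)
    if normalize(actual) != normalize(expected):
        raise AssertionError(
            f"{obj_type}: expected {expected!r}, got {actual!r}")
```

All automated tests call the same function; adding support for a new object type means adding one normalizer rather than touching every test.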
Architecture
It may be necessary to change the architecture to support testability improvements to the SUT. These changes can be made in the architecture of the SUT and/or in the architecture of the automation. This may result in a significant improvement in test automation, but may also require significant changes and investment in the SUT/TAS. For example, if the SUT is changed to provide APIs for testing, the TAS should also be revised accordingly. Adding this type of functionality at a later stage can be quite expensive; it is much better to consider this at the start of automation (and in the early stages of SUT development – see Article Design for Testability and Automation).
Pre- and post-processing
Provide standard tasks for setup and teardown. These are also referred to as pre-processing (setup) and post-processing (teardown). This eliminates the need to repeatedly perform tasks for each automated test and reduces not only maintenance costs, but also the effort required to implement new automated tests.
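Such standard setup and teardown tasks can be provided once by the TAS, for example as a context manager that every test runs inside. This is a minimal sketch; the task names in the log are invented:

```python
import contextlib

@contextlib.contextmanager
def standard_environment(log):
    """Standard pre-processing before the test body and
    post-processing after it, even if the test fails."""
    log.append("setup: start SUT, load test data")      # pre-processing
    try:
        yield
    finally:
        log.append("teardown: clean up, archive logs")  # post-processing

# Usage: a test body runs between the shared setup and teardown.
log = []
with standard_environment(log):
    log.append("test: execute steps")
```

Individual test scripts no longer repeat these tasks, which reduces both maintenance costs and the effort to implement new automated tests.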
Documentation
This includes all forms of documentation, from script documentation (what the scripts do, how they should be used, etc.) to user documentation for the TAS to reports and logs generated by the TAS.
Tool support
Add additional TAS features and functionality such as detailed reports, logs, integration with other systems, etc. Add new features only when they are actually used. Adding unused features only adds complexity and reduces reliability and maintainability.
TAS updates and upgrades
Updates or upgrades to new versions of the TAS may make new features available for use by test cases, or may fix known defects. The risk is that upgrading the framework (either by upgrading the existing test tools or by introducing new tools) may have a negative impact on existing test cases. Test the new version of the test tool by running sample tests before introducing it. The sample tests should be representative of the automated tests for different applications, different test types, and different environments, where applicable.