Automated test suites must be tested for completeness, consistency, and correct behavior. To ensure that the automated test suite is ready for use at a given time, or to determine whether it is suitable for use, various types of verification can be performed.
There are a number of steps that can be taken to verify the automated test suite. These include:
- Running test scripts with known passing and failing tests
- Reviewing the test suite
- Reviewing new tests that focus on new features of the framework
- Considering test repeatability
- Verifying that the automated test suite contains sufficient verification points
Each of these points is explained in more detail below.
Running test scripts with known passing and failing tests
When test cases that are known to pass suddenly fail, it is immediately clear that something is fundamentally wrong and should be fixed as soon as possible. Conversely, when a test passes although it should have failed, the test case that did not behave correctly must be identified. It is also important to verify that log files, performance data, and the setup and teardown of each test case/script are created correctly. In addition, it is helpful to run a sample of tests from the different test types and levels (functional tests, performance tests, unit tests, etc.).
Reviewing the test suite
Check the test suite for completeness (all test cases have expected results and the required test data is available) and verify that its version is consistent with the versions of the framework and the SUT.
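Such a completeness review can be partly automated. The following sketch assumes a simple registry of test-case records (the field names, file paths, and version string are illustrative, not from any particular framework):

```python
from pathlib import Path

# Assumed target version of the framework/SUT this suite was written for.
SUITE_VERSION = "2.1"

# Hypothetical test-case registry: id, expected result, test data, version.
test_cases = [
    {"id": "TC-001", "expected": "order confirmed", "data": "data/tc001.csv", "version": "2.1"},
    {"id": "TC-002", "expected": None,              "data": "data/tc002.csv", "version": "2.0"},
]

def review_suite(cases):
    """Report test cases that lack expected results, data, or the right version."""
    problems = []
    for case in cases:
        if not case.get("expected"):
            problems.append(f"{case['id']}: no expected result")
        if not Path(case["data"]).exists():
            problems.append(f"{case['id']}: test data file missing")
        if case.get("version") != SUITE_VERSION:
            problems.append(f"{case['id']}: version mismatch")
    return problems

for issue in review_suite(test_cases):
    print(issue)
```

An empty problem list is then one piece of evidence that the suite is complete and version-consistent before a run is started.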
Reviewing new tests that focus on new features of the framework
When a new feature of the TAS is used in test cases for the first time, it should be closely reviewed and monitored to ensure that the feature works correctly.
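One way to keep such a feature under close observation is to log every use of it during its first runs. This is a generic sketch, assuming a hypothetical new framework call (`capture_screenshot` is a placeholder, not a real API):

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("new-feature-watch")

def under_review(func):
    """Log every call to a newly introduced framework feature, including
    its arguments, result, and any exception, so early runs can be audited."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("calling new feature %s args=%s kwargs=%s", func.__name__, args, kwargs)
        try:
            result = func(*args, **kwargs)
            log.info("%s returned %r", func.__name__, result)
            return result
        except Exception:
            log.exception("%s raised during review period", func.__name__)
            raise
    return wrapper

@under_review
def capture_screenshot(name):
    # Placeholder for the real framework call (assumed).
    return f"{name}.png"

print(capture_screenshot("login_page"))
```

Once the feature has proven itself over a number of runs, the wrapper can simply be removed without touching the test cases.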
Considering test repeatability
When tests are repeated, the result should always be the same. Test cases in the test set that do not produce a reliable result (e.g., due to race conditions) could be removed from the active automated test suite and analyzed separately to find the root cause. Otherwise, time is spent again and again analyzing the same problem on every test run.
Intermittent failures need to be analyzed. The problem may be in the test case itself or in the framework (or it could even be a problem in the SUT). Analysis of the log files (of the test case, the framework, and the SUT) may identify the cause of the problem. Troubleshooting may also be necessary. Assistance from the test analyst, software developer, and domain expert may be required to find the root cause.
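A simple repeatability probe runs each test several times and quarantines any test whose outcome varies between runs. The two sample tests below are synthetic stand-ins (the "flaky" one alternates deterministically to simulate a race-condition-style failure):

```python
import itertools

def stable_test():
    return True  # always the same outcome

_counter = itertools.count()

def flaky_test():
    # Simulated intermittent failure: alternates pass/fail on each call.
    return next(_counter) % 2 == 0

def check_repeatability(tests, runs=20):
    """Return the names of tests whose pass/fail outcome varies across runs."""
    flaky = []
    for name, test in tests.items():
        outcomes = {test() for _ in range(runs)}
        if len(outcomes) > 1:  # mixed pass/fail -> not repeatable
            flaky.append(name)
    return flaky

quarantine = check_repeatability({"stable": stable_test, "flaky": flaky_test})
print(quarantine)
```

Tests landing in the quarantine list are candidates for separate root-cause analysis rather than for the active suite.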
Verifying that there are sufficient verification points in the automated test suite and/or test cases
It must be possible to verify that the automated test suite was executed and produced the expected results. Evidence must be provided that the test suite and/or test cases were executed as expected. This evidence may include log entries at the beginning and end of each test case, a recorded execution status for each completed test case, verification that postconditions were met, etc.
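The evidence points listed above can be combined in a small execution wrapper. The sketch below is a minimal illustration (the test case, its postcondition, and the shared state are hypothetical):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("evidence")

results = {}  # recorded execution status per test case

def run_with_evidence(test_id, test_fn, postcondition):
    """Log start/end of a test case, check its postcondition, record status."""
    log.info("START %s", test_id)
    try:
        test_fn()
        assert postcondition(), "postcondition not met"
        results[test_id] = "PASS"
    except AssertionError as exc:
        results[test_id] = f"FAIL ({exc})"
    finally:
        log.info("END %s status=%s", test_id, results[test_id])

# Hypothetical test case: stands in for a real interaction with the SUT.
state = {"order_created": False}

def tc_create_order():
    state["order_created"] = True

run_with_evidence("TC-010", tc_create_order, lambda: state["order_created"])
print(results)
```

The resulting log trail plus the `results` record together provide the evidence that each test case ran, finished, and left the SUT in the expected state.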