Traditionally, organizations have developed manual test cases. When deciding on an automated testing environment, it is necessary to evaluate the current state of manual testing and determine the most effective approach to automating these testing resources. The existing structure of a manual test may or may not be suitable for automation; if it is not, a complete rewrite of the test may be required to support automation. Alternatively, relevant components of existing manual tests (e.g., input values, expected results, navigation path) can be extracted and reused for automation. A manual test strategy that takes automation into account produces tests whose structure eases later migration to automation.
Not all tests can or should be automated, and sometimes the first iteration of a test may be manual.
Therefore, there are two aspects of the transition to consider: the initial conversion of existing manual tests to automation and the subsequent conversion of new manual tests to automation.
Also note that certain types of tests can only be executed (effectively) in an automated manner, such as reliability testing, stress testing, or performance testing.
With test automation, it is possible to test applications and systems without a user interface. In this case, tests can be performed at the integration level through interfaces in the software. While this type of test case could also be executed manually (using manually entered commands to trigger the interfaces), it may not be practical. With automation, it is possible, for example, to insert messages into a queuing system. In this way, testing can start earlier (and detect errors earlier) when manual testing is not yet possible.
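As an illustration, the following minimal sketch injects a test message into a queuing system using the pika client for RabbitMQ. The queue name, message format, and broker location are illustrative assumptions, not prescribed by the syllabus.

```python
# Minimal sketch: feeding a test message into a queuing system with pika
# (a RabbitMQ client). Queue name, message format, and broker address are
# illustrative assumptions.
import json
import pika

def send_test_message(order):
    """Publish a test order onto the SUT's (assumed) input queue."""
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="orders.in", durable=True)  # hypothetical queue
    channel.basic_publish(
        exchange="",
        routing_key="orders.in",
        body=json.dumps(order).encode("utf-8"),
    )
    connection.close()

if __name__ == "__main__":
    # Inject a message at the integration level, long before a UI exists.
    send_test_message({"orderId": 42, "item": "widget", "quantity": 3})
```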
Before starting automated testing, one must weigh the applicability and feasibility of automated versus manual testing. Suitability criteria may include, but are not limited to:
- Frequency of use
- Complexity of automation
- Compatibility of tool support
- Maturity of the test process
- Suitability of the automation for the particular phase of the software product lifecycle
- Sustainability of the automated environment
- Controllability of the SUT
Each of these points is discussed in more detail below.
Frequency of use
The frequency with which a test needs to be executed is one criterion for deciding whether automation is appropriate or not.
Tests that are executed regularly, such as those run as part of a major or minor release cycle, are better candidates for automation because the automation effort is amortized over many runs. In general, the greater the number of application releases, and thus the corresponding test cycles, the greater the benefit of test automation.
When functional tests are automated, they can be used as part of regression testing in subsequent releases. Automated tests used in regression testing provide a high return on investment (ROI) and risk mitigation for the existing code base.
If a test script is run only once a year and the SUT changes during that year, creating an automated test may be neither feasible nor efficient; the effort of adapting the test to the changed SUT each year is often better spent executing the test manually.
Complexity of automation
In cases where a complex system needs to be tested, automation can be of great benefit to save the manual tester from repeating complex steps that are tedious, time-consuming, and error-prone to execute.
However, certain test scripts can be difficult or not cost-effective to automate. A number of factors can affect this, including:
- a SUT that is not compatible with existing automated testing solutions
- the need to create extensive program code and develop calls to APIs in order to automate
- the variety of systems that need to be addressed as part of a test execution
- interaction with external interfaces and/or proprietary systems
- some aspects of usability testing
- the time required to test the automation scripts themselves
Compatibility of tool support
There are a variety of development platforms used to build applications. The challenge for the tester is to know which testing tools are available for a particular platform and to what extent the platform is supported (if at all). Organizations use a variety of testing tools: commercial, open-source, and homegrown. Each organization has different needs and resources for test tool support. Commercial vendors typically offer paid support and, in the case of market leaders, usually have an ecosystem of experts who can help implement test tools. Open-source tools may offer support in the form of online forums where users can get information and ask questions. Internally developed test tools rely on the support of existing staff.
The issue of test tool compatibility should not be underestimated. Embarking on a test automation project without knowing exactly the level of compatibility between the test tools and the SUT can have disastrous consequences. Even if most tests for the SUT can be automated, the most critical tests may not be.
Maturity of the test process
To effectively implement automation within a test process, the process must be structured, disciplined and repeatable. Automation brings an entire development process into the existing test process, requiring management of the automation code and associated components.
Suitability of the automation for the particular phase of the software product lifecycle
A SUT has a product lifecycle that can range from years to decades. Early in the development of a system, the system changes and expands to fix bugs and add enhancements to meet end-user requirements.
In the early stages of a system’s development, changes may be too rapid to implement an automated testing solution. As screen layouts and controls are tweaked and enhanced, creating automation in a dynamically changing environment may require constant rework, which is neither efficient nor effective. This would be similar to trying to change a tire on a moving car; it is better to wait until the car stops. For large systems in a sequential development environment, the best time to implement automated testing is when the system has stabilized and contains a core of functionality.
Over time, systems reach the end of their product lifecycle and are either retired or redesigned to use newer and more efficient technologies. For a system nearing the end of its lifecycle, automation is not recommended because there is little benefit to such a short-lived initiative. For systems that are being redesigned using a different architecture but retaining existing functionality, an automated test environment that defines data elements is equally useful for the old and new systems. In this case, the test data could be reused, while the automated environment itself would need to be recoded for compatibility with the new architecture.
Sustainability of the automated environment
An automation test environment must be flexible and adaptable to the changes that will occur to the SUT over time. This includes the ability to quickly diagnose and correct problems with the automation, the ease with which automation components can be maintained, and the ease with which new features and support can be added to the automated environment. These attributes are integral to the overall design and implementation of the gTAA.
Controllability of the SUT (preconditions, setup, and stability)
The TAE should identify control and visibility features in the SUT that help create effective automated tests. Otherwise, test automation relies only on interaction with the user interface, resulting in a less maintainable test automation solution. For more information, see Section 2.3 on Design for Testability and Automation.
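As a hedged illustration of such control and visibility features, the following Python sketch uses a hypothetical test API in the SUT to establish a precondition and observe the result directly, rather than driving the user interface. All endpoints, payloads, and status codes are assumptions.

```python
# Sketch of exploiting a control hook in the SUT instead of driving the GUI.
# The /test-api endpoints, payloads, and expected status are hypothetical.
import requests

BASE = "http://sut.example.com"  # assumed SUT address

def test_overdraft_rejection():
    # Precondition via control hook: create an account in a known state
    # (faster and more stable than clicking through the UI).
    r = requests.post(f"{BASE}/test-api/accounts", json={"balance": 10.00})
    account_id = r.json()["id"]

    # Exercise the business rule through the public API.
    r = requests.post(f"{BASE}/api/accounts/{account_id}/withdraw",
                      json={"amount": 50.00})

    # Visibility: the SUT reports its state directly, no screen scraping.
    assert r.status_code == 409
```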
Technical planning to support ROI analysis
Test automation can benefit a test team to varying degrees. However, implementing an effective automated testing solution requires significant effort and cost. Before spending the effort to develop automated testing, an assessment should be conducted to determine the intended and potential overall benefits and outcome of implementing test automation. Once this is determined, the activities required to implement such a plan should be defined and the associated costs identified to calculate the ROI.
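As a minimal illustration of such an ROI calculation, the following sketch compares the cost of building and maintaining a TAS with the manual effort it replaces. All figures are invented example values.

```python
# Illustrative ROI calculation; all figures are invented example values.
build_cost = 400           # hours to implement the TAS and scripts
maintain_per_release = 10  # hours to maintain scripts per release
manual_per_release = 60    # hours of manual effort replaced per release
releases = 10              # test cycles over the planning horizon

cost = build_cost + maintain_per_release * releases   # 500 hours
benefit = manual_per_release * releases               # 600 hours
roi = (benefit - cost) / cost
print(f"ROI over {releases} releases: {roi:.0%}")  # -> ROI over 10 releases: 20%
```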
To adequately prepare for the transition to an automated environment, the following areas must be considered:
- Availability of tools in the test environment for test automation
- Correctness of test data and test cases
- Scope of the test automation effort
- Training the test team for the paradigm shift
- Roles and responsibilities
- Collaboration between developers and test automators
- Parallel effort
- Reporting on test automation
Availability of tools in the test environment for test automation
Selected test tools must be installed in the test lab environment and verified to be functional. This may include downloading service packs or release updates, selecting the appropriate installation configuration (including add-ins) required to support the SUT, and ensuring that the TAS functions correctly in both the test lab environment and the automation development environment.
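A simple automated sanity check can confirm that the installation succeeded. The following sketch assumes a command-line test tool and a reachable SUT health endpoint; both names are placeholders for the actual environment.

```python
# Sketch of an automated sanity check run after installing the TAS; the tool
# command and SUT URL are placeholders for the actual environment.
import subprocess
import urllib.request

def check_tool_installed():
    # "automation-tool" is a placeholder for the selected test tool's CLI.
    result = subprocess.run(["automation-tool", "--version"],
                            capture_output=True, text=True)
    assert result.returncode == 0, f"tool not functional: {result.stderr}"
    print("Tool reports:", result.stdout.strip())

def check_sut_reachable(url="http://sut-test-lab.example.com/health"):
    # Assumed health endpoint of the SUT in the test lab.
    with urllib.request.urlopen(url, timeout=5) as resp:
        assert resp.status == 200

if __name__ == "__main__":
    check_tool_installed()
    check_sut_reachable()
    print("TAS environment check passed")
```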
Correctness of test data and test cases
Correctness and completeness of manual test data and test cases is necessary to ensure that their use with automation produces predictable results. Tests executed within automation require unique data for input, navigation, synchronization, and validation.
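The following pytest sketch illustrates keeping input data and expected results explicit and unique per test case so that automated execution produces predictable results. The login function is a stand-in for the real SUT interface.

```python
# Sketch of data-driven testing with explicit, unique data per case.
import pytest

def attempt_login(user, password):
    # Stand-in for the real SUT interface.
    return user == "alice" and password == "correct-pw"

# Each record carries its own input and expected result (no shared state).
LOGIN_CASES = [
    ("alice", "correct-pw", True),
    ("alice", "wrong-pw",   False),
    ("",      "any-pw",     False),
]

@pytest.mark.parametrize("user,password,expected", LOGIN_CASES)
def test_login(user, password, expected):
    assert attempt_login(user, password) == expected
```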
Scope of the test automation effort
To achieve initial automation success and get feedback on technical issues that may impact progress, it is helpful to start with a limited scope to facilitate future automation tasks. A pilot project can address an area of system functionality that is representative of overall system interoperability. Lessons learned from the pilot help adjust future time estimates and schedules, as well as identify areas that require dedicated technical resources. A pilot project provides a quick way to show initial automation successes, which encourages continued management support.
To achieve this, the test cases to be automated should be chosen wisely: select those that require little effort to automate but provide high added value. Automated regression or smoke tests add significant value, as these tests are usually run quite frequently, even daily. Another good candidate to start with is reliability testing. These tests often consist of multiple steps and are run over and over again, revealing problems that are difficult to detect manually (a sketch of such a reliability loop follows at the end of this subsection). Reliability tests take little effort to implement but can deliver value very quickly.
Such pilot projects put automation in the spotlight (manual testing effort saved or serious problems uncovered) and pave the way for further investment of effort and budget.
In addition, priority should be given to tests that are critical to the business, as these will initially provide the greatest benefit. At the same time, a pilot project should avoid the most technically demanding tests; otherwise, too much effort will be put into developing the automation without any presentable results. In general, identifying tests that exercise a large portion of the application will provide the necessary momentum for the automation effort.
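The reliability loop mentioned above can be illustrated with a short sketch that repeats one scripted transaction many times and records intermittent failures. Here, execute_transaction() is a placeholder for the real multi-step scenario.

```python
# Sketch of a reliability test: repeat one scripted transaction many times
# and record intermittent failures; execute_transaction() is a placeholder.
import random

def execute_transaction():
    # Stand-in for the multi-step scenario driven against the SUT.
    return random.random() > 0.001  # simulated rare failure

def run_reliability_test(iterations=10_000):
    failures = [i for i in range(iterations) if not execute_transaction()]
    print(f"{iterations} runs, {len(failures)} failures at: {failures[:10]}")
    return failures

if __name__ == "__main__":
    run_reliability_test()
```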
Training the test team for the paradigm shift
There are many different types of testers: some are subject matter experts who come from the end-user community or have worked as business analysts, while others have strong technical skills that allow them to better understand the underlying system architecture. A broad mix of backgrounds is desirable for effective testing. As the test team moves to automation, roles become more specialized. Changing the composition of the test team is essential to the success of automation, and informing the team of the intended change early on helps alleviate fears about changed roles or of becoming redundant. If the move to automation is approached properly, all members of the testing team should be excited and ready to participate in the organizational and technical change.
Roles and responsibilities
Test automation should be an activity that everyone can participate in. However, this does not mean that everyone has the same role. Designing, implementing, and maintaining an automated test environment is technical in nature and therefore should be reserved for individuals with good programming skills and a technical background. The result of developing automated tests should be an environment that can be used by both technical and non-technical people. To maximize the value of an automated test environment, individuals with expertise and testing skills are needed because the appropriate test scripts (including the appropriate test data) must be developed. These are used to drive the automated environment and ensure the targeted test coverage. Business experts review reports to confirm the functionality of the application, while technical experts ensure that the automated environment is working correctly and efficiently. These technical experts can also be developers who are interested in testing. Software development experience is essential to designing maintainable software, and this is paramount in test automation. Developers can focus on the test automation framework or test libraries. The implementation of test cases should remain with the testers.
Collaboration between developers and test automators
Successful test automation also requires the involvement of both the software development team and the testers. Developers and testers need to work much more closely together on test automation so that developers can provide support and technical information about their development methods and tools. Test automation engineers may raise concerns about the testability of system designs and developer code. This is especially the case when standards are not followed or when developers use unusual, homegrown, or very new libraries/objects. For example, developers might choose a third-party GUI control that is not compatible with the chosen automation tool. Finally, an organization’s project management team must have a clear understanding of the types of roles and responsibilities required for successful automation.
Parallel efforts
As part of the transition activities, many organizations form a parallel team that begins automating the existing manual test scripts. The new automated scripts are then integrated into the test operation, replacing the manual scripts. Before doing so, however, it is often recommended to compare and verify that the automated script performs the same tests and validations as the manual script it replaces.
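One possible way to support such a comparison is to load the expected results documented in the manual test case and check them against the validations the automated script actually performs. The file name and layout below are assumptions.

```python
# Sketch of verifying an automated script against the manual test it replaces:
# expected results documented in the manual test case are loaded and compared
# with the checks the automated run performs. File name and layout assumed.
import csv

def load_manual_expectations(path="manual_test_TC042.csv"):
    # Assumed columns: "step", "expected"
    with open(path, newline="") as f:
        return {row["step"]: row["expected"] for row in csv.DictReader(f)}

def verify_coverage(automated_checks, path="manual_test_TC042.csv"):
    """Report manual validations the automated script does not perform."""
    expected = load_manual_expectations(path)
    missing = {s: e for s, e in expected.items() if s not in automated_checks}
    if missing:
        print("Automated script misses manual validations:", missing)
    return not missing
```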
In many cases, an evaluation of the manual scripts is performed before switching to automation. As a result of such an assessment, it may be determined that the existing manual test scripts need to be restructured to provide a more efficient and effective approach to automation.
Reporting on test automation
There are several reports that can be automatically generated by a TAS. These include the pass/fail status of individual scripts or steps within a script, overall test execution statistics, and the overall performance of the TAS. It is equally important to verify the correct operation of the TAS itself so that the reported application-specific results can be considered accurate and complete.
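A minimal sketch of such automatically generated reporting, assuming a simple result structure collected by the TAS during execution:

```python
# Sketch of a TAS-level summary report; the result structure is assumed.
from collections import Counter

results = [  # typically collected by the TAS during execution
    {"script": "login_smoke",    "status": "pass"},
    {"script": "order_checkout", "status": "fail"},
    {"script": "order_checkout", "status": "pass"},
]

def summarize(results):
    counts = Counter(r["status"] for r in results)
    total = len(results)
    print(f"Executed: {total}, passed: {counts['pass']}, failed: {counts['fail']}")
    print(f"Pass rate: {counts['pass'] / total:.0%}")

summarize(results)  # -> Executed: 3, passed: 2, failed: 1 / Pass rate: 67%
```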
Source: ISTQB®: CTAL Test Automation Engineer Syllabus Version 1.0