How many software managers and executives envision armies (i.e., racks of computers) automatically running millions of tests over and over again and finding all the problems in the software? What a great idea: computer software that tests computer software! Executives' eyes light up: "You mean we do not have to hire and pay humans to do it?" That is what the companies selling the automated tools tell everyone, right? "Just shell out $100k for this nice automated tool; look at the pretty cost-vs-payback PowerPoint graphs and remember how much money this will save you!" they insist. Except they do not tell you one dirty little secret: it rarely works out for most companies. The tools eventually end up sitting on a shelf in someone's cube.
Why? Because a tool alone does not solve the testing problem. You still need: 1) a development process that supports automated testing; 2) vigilant support for automation from management during software release cycles; 3) tools that are smart enough to deal with unexpected problems and results while testing (computer software is stupid); 4) initial and ongoing training (as employees leave the company and new ones are hired); and 5) many other items.
What kind of development process is needed to support software test automation? It is very easy to make a change to the software application and break the automated test scripts. For example, say you had written 50 scripts to test the ordering process for a retail website (like Amazon), one for each state in the USA. Each automated script would select an item, enter the quantity, select the shipping method, add the sales tax for that state and/or locality, total the shopping cart, and then click the Order Now button. Now let's say the programmers changed the order process by adding an extra step (like a verification step), so that a new page would come up with the order details and a Next button, and clicking Next would take you to another page with the Order Now button. You would need to modify every script to insert the extra order-details page and click the Next button. And that's only if you had one script per order (per state). If you had 10 scripts per state, you would need to modify 500 scripts! So the best software development processes try to minimize changes to the software that would impact the automation scripts (not an easy task most of the time). Also, there are ways to structure the automation scripts themselves to reduce script maintenance effort.
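One common way to structure scripts for lower maintenance is the page-object pattern: every script drives the UI through one shared class, so a flow change is fixed in one place rather than in 50 (or 500) scripts. Here is a minimal sketch; `SimulatedBrowser` is a hypothetical stand-in for a real driver (such as Selenium WebDriver), and all element and method names are illustrative, not from any particular tool.

```python
# Minimal sketch of the page-object pattern: scripts call a shared page
# class, so a UI change touches one method instead of every script.
# "SimulatedBrowser" is a hypothetical stand-in for a real driver.

class SimulatedBrowser:
    """Fake driver that just records the actions a script performs."""
    def __init__(self):
        self.actions = []

    def click(self, element):
        self.actions.append(f"click:{element}")

    def type(self, element, text):
        self.actions.append(f"type:{element}:{text}")


class CheckoutPage:
    """The single place that knows the checkout flow."""
    def __init__(self, browser):
        self.browser = browser

    def place_order(self, item, qty, shipping):
        self.browser.type("item", item)
        self.browser.type("quantity", str(qty))
        self.browser.click(f"shipping:{shipping}")
        # When developers add the verification page, only this method
        # changes -- not the 50 per-state scripts that call it:
        self.browser.click("next")        # new verification step
        self.browser.click("order_now")


# A per-state script only calls the page object:
browser = SimulatedBrowser()
CheckoutPage(browser).place_order("book", 2, "ground")
print(browser.actions[-1])  # -> click:order_now
```

With this structure, the extra verification page in the example above becomes a two-line change in `place_order` instead of edits to hundreds of scripts.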
So why does management care about automation? There are a number of issues regarding test automation that require management support. Developing the initial set of automated scripts typically takes more time than simply testing the software manually. The real payback with automation comes not from the first run, but from subsequent runs of the scripts. Typically, you can click the Run button (in a figurative sense) and, when the run completes, analyze the results, which usually takes far less time than manually executing all the tests. So management needs to support the longer time it takes to develop the initial set of automated scripts in order to get the payback on subsequent runs.
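The payback argument can be made concrete with simple break-even arithmetic. The figures below are made-up illustrative numbers, not industry data: the point is only that the one-time scripting cost is recovered after enough runs.

```python
# Hypothetical break-even arithmetic for automation payback.
# All hour figures are invented for illustration.
manual_hours_per_run = 40       # one full manual test pass
automation_build_hours = 200    # one-time scripting effort
automated_hours_per_run = 5     # kick off the run + analyze results

def breakeven_runs(build, auto_per_run, manual_per_run):
    """Smallest number of runs where total automated cost <= manual cost."""
    runs = 0
    while build + runs * auto_per_run > runs * manual_per_run:
        runs += 1
    return runs

print(breakeven_runs(automation_build_hours,
                     automated_hours_per_run,
                     manual_hours_per_run))  # -> 6
```

With these assumed numbers, automation costs more than manual testing for the first five regression passes and pays for itself from the sixth pass onward, which is exactly the multi-cycle commitment management has to sign up for.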
Management also needs to support making changes to the software application in ways that minimize impact to the automation scripts, something that sometimes takes longer to design and implement than a straightforward change or fix. But again, reducing maintenance of the automation scripts also saves money and time for the project and company. Management needs to understand this concept.
Management must support the cost and training involved with standardizing on an automation tool. There are periodic upgrade costs as well as staff training costs (as resources come and go through the organization).
Many times, software applications encounter unexpected situations and uncommon errors. Hopefully, the application code is written to handle these exceptions and error conditions gracefully. Unfortunately, most test automation scripts do not gracefully handle unforeseen circumstances or adapt their behavior. There is no real artificial intelligence in the scripts to change the flow of execution, or the verification of expected results, on the fly. Tools are getting better at this, but there is still a long way to go.
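Short of real intelligence, scripts can at least be written with explicit recovery hooks for the surprises you can anticipate, such as an unexpected dialog popping up mid-flow. Below is a minimal sketch of that idea; the exception type, step, and recovery function are all hypothetical, simulated stand-ins rather than the API of any real tool.

```python
# Sketch of a fault-tolerant step runner: instead of the whole run
# failing on a surprise, a known recovery action (e.g., dismissing an
# unexpected dialog) is attempted and the step is retried.

class UnexpectedDialog(Exception):
    """Simulated surprise condition (e.g., a survey popup)."""

def run_step(step, recover, retries=2):
    """Run one script step, invoking `recover` on known surprises."""
    for _attempt in range(retries + 1):
        try:
            return step()
        except UnexpectedDialog:
            recover()   # e.g., close the dialog, log a screenshot
    raise RuntimeError("step failed after retries")

# Simulated flaky step: a dialog blocks the first attempt only.
state = {"dialog_open": True}

def click_order_now():
    if state["dialog_open"]:
        raise UnexpectedDialog("survey popup")
    return "order placed"

def dismiss_dialog():
    state["dialog_open"] = False

print(run_step(click_order_now, dismiss_dialog))  # -> order placed
```

This only covers surprises someone thought to code a recovery for; a truly novel failure still stops the run, which is exactly the limitation described above.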
Staff members do come and go. The industry average is about 10% staff turnover every year or so. Finding software testers who know a particular testing tool (and many times a particular version of the tool) is somewhat difficult. Plus, the specific implementation of the tool in an organization may affect its behavior and execution properties. So ongoing training is a must, and many organizations are not ready to make this kind of commitment.
What works? Well, in my experience, there have been a few software organizations that have successfully automated. Some have created software development processes that consist of a core set of code and then changes are layered on (like an onion) to reduce automated test script breakage. All changes are analyzed for coding and automated testing impacts before approvals are granted for implementation. Management also supports (and budgets for) ongoing training and upgrades of the automated tool.
So, because of the many obstacles and hurdles involved, test automation continues to be the holy grail of software testing. Even successful automation still requires lots of human involvement for script coding, maintenance, and results analysis. But the benefits are huge for those companies that can tame the process. Automation can free up your software testing resources to be more creative and hunt for the hard-to-find test scenarios while robotic automated scripts cover the common (and boring) ones. In my experience, test automation does not usually result in faster testing but rather in more testing, which in turn results in a higher-quality software product.