Automated Software Testing: In The Beginning
Like other living things, software QA processes evolve over time. From its humble beginnings, test automation has undergone a similar transformation. The first stage of test automation was the ‘Record and Playback’ age. Tools were marketed for their ability to record a typical user session and then faithfully play it back using the same objects and inputs. Good marketing, bad practice, because as soon as the application changed, the recording stopped working. But it did get people interested in automation.
The next stage could be called the ‘Script Modularity’ age. The recording concept was retained, but now it was linked to a scripting language that allowed a tool expert to create modular, reusable scripts to perform the actions required in a test case. These scripts could be maintained as separate modules that corresponded roughly to the modules of an application, making it easier to change the test code when the application code changed. Easier, but still not efficient. Complex applications would require complex scripts, which usually required more expertise to maintain.
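For a toy illustration of the idea, here is a minimal sketch in modern Python rather than any particular tool’s scripting language; the login form, function names, and hardcoded values are invented for illustration only:

```python
# Script Modularity in spirit: the steps for one application module are
# wrapped in a reusable function, and the test cases call that module.
# When the login form changes, only this one function needs updating.

def submit_login(username, password):
    """Reusable script module for the application's login form."""
    # A real tool would drive the UI here; this stand-in just simulates it.
    return "dashboard" if (username, password) == ("alice", "s3cret") else "login"

def test_valid_login():
    assert submit_login("alice", "s3cret") == "dashboard"

def test_invalid_login():
    assert submit_login("alice", "oops") == "login"
```

Notice that the test data is still baked right into each test.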
And what about handling all that test data hardcoded into each script? The mind boggled.
Luckily it didn’t boggle too long, which led to the next stage, the ‘Data-Driven’ age. Tools were constructed that allowed access to large pools of external test data, so that these modular scripts could process iteration after iteration of data input. They could churn through mountains of data, as often as desired. What could be better? Well, there were still maintenance issues, as the number of test scripts still grew in direct, or sometimes geometric, proportion to the growth of applications. Additional tools were created just to manage the test execution tools as the asset inventory climbed ever higher. And all those tool experts, they were getting expensive. But that’s the price of progress, right?
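To make the data-driven idea concrete, here is a small sketch using pytest’s parametrize feature; the inline table stands in for the external data pools of the era, and the names and values are invented:

```python
import pytest

# Data-driven testing in miniature: the test logic is written once, and the
# tool feeds it row after row of data. In practice this table would come from
# a spreadsheet, CSV file, or database rather than an inline list.
LOGIN_DATA = [
    ("alice", "s3cret", True),
    ("alice", "wrong-password", False),
    ("", "", False),
]

def check_login(username, password):
    """Stand-in for the application call a modular script would drive."""
    return username == "alice" and password == "s3cret"

@pytest.mark.parametrize("username,password,expected", LOGIN_DATA)
def test_login(username, password, expected):
    # One test function, many data iterations.
    assert check_login(username, password) == expected
```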
Wrong! Evolution usually favors simplicity, since knobby bits tend to break off and fail, sometimes endangering the entire species. Another level of abstraction was necessary, and this ushered in the ‘Keyword-Driven’ age. The test actions were generalized and stored in function libraries, objects were either inventoried in repositories or identified descriptively by type, and testers who were experts in application testing no longer needed to be test tool experts to execute their automated tests. By choosing from a list of keywords linked to functions, they could now describe their tests in their own terms. Test tool script maintenance was simplified down to occasionally updating the few assets that were required to process the keywords, which meant fewer tool experts (a.k.a. ‘knobby bits’, lol). Truly, a Golden Age.
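Here is a rough sketch of the keyword-driven idea in plain Python, not modeled on any particular tool; the keywords, the tiny engine, and the fake application state are all invented for illustration:

```python
# Keyword-driven testing in miniature: testers describe a test as rows of
# (keyword, arguments), and a small engine maps each keyword to a function
# in the library. Maintenance lives in the library, not in every test.

FUNCTION_LIBRARY = {}

def keyword(name):
    """Register a function under a tester-friendly keyword."""
    def register(fn):
        FUNCTION_LIBRARY[name] = fn
        return fn
    return register

@keyword("open application")
def open_application(state):
    state["screen"] = "login"

@keyword("enter credentials")
def enter_credentials(state, username, password):
    state["user"] = username if password == "s3cret" else None

@keyword("log in")
def log_in(state):
    state["screen"] = "dashboard" if state.get("user") else "login"

@keyword("verify screen")
def verify_screen(state, expected):
    assert state["screen"] == expected, f"expected {expected}, got {state['screen']}"

# The tester's view of the test: keywords and data, no tool scripting.
TEST_CASE = [
    ("open application",),
    ("enter credentials", "alice", "s3cret"),
    ("log in",),
    ("verify screen", "dashboard"),
]

def run(test_case):
    state = {}
    for name, *args in test_case:
        FUNCTION_LIBRARY[name](state, *args)

if __name__ == "__main__":
    run(TEST_CASE)
    print("test passed")
```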
Of course, evolution doesn’t stop there. There are many experiments in progress today: business process testing, model-driven testing, intelligent query-driven testing, to name a few. The goal seems to be to find the toolset that provides the greatest test coverage with the least amount of maintenance.
And certainly the field of artificial intelligence will have a major impact on software testing in the future.