Friday, October 22, 2010

Mercury LoadRunner Evaluation

The organization had initially invested in a full Mercury Interactive suite of tools - including LoadRunner. As stated in the Mercury QuickTest Professional evaluation, both TestDirector and QuickTest Professional were eventually accepted by the organization, but LoadRunner remained effectively "shelf ware". Now the organization plans to implement a commercial data warehouse solution (SAP Business Warehouse) but has not developed any in-house performance or stress testing solution. Solution - "dust off" LoadRunner and quickly implement a targeted performance testing solution to meet this immediate need.

Informal Evaluation

No formal evaluation of LoadRunner was performed, since both the immediate success of the performance testing efforts and the previous investment in this tool pre-empted any formal evaluation. The tool was first used to meet a specific need, but the flexibility and power of the tool encouraged the testing team to capture their initial impressions of LoadRunner and the lessons learned.
General: The framework supports the creation of a series of business processes or business threads that are captured as virtual user scripts. LoadRunner enabled the testing team to organize, author, and maintain a set of Virtual User Scripts and supporting software libraries in a shareable environment. These scripts were then used to simulate many users (hundreds to thousands) accessing the application at once via the LoadRunner Controller.
Development: LoadRunner provides a software development environment designed to meet the needs of experienced Test Automation Engineers. Two primary development languages are supported by LoadRunner: JavaScript and C-script. This makes the development environment accessible to anyone with development experience. Experienced Automation Engineers will have little difficulty becoming proficient in LoadRunner - the online help and built-in tutorial certainly provide a firm foundation to grow from. It should be noted that this is a rather primitive / limited development environment, but little actual development should be required when deploying a performance testing solution with LoadRunner. The actual organization of the code is left up to the Engineers - the test organization must take responsibility for organizing and maintaining this code base.
Virtual User Scripts: LoadRunner provides a simple record and playback mechanism that can be used to create scripts - Virtual User Scripts. These scripts become the baseline for developing a scenario or business thread that will be used to exercise the application / architecture under test. The Virtual User development environment allows the Test Automation Engineer to customize the initial scripts by defining transactions, defining rendezvous points, controlling playback behavior, supporting data-driven scripts (parameterization), and fully customizing the underlying code.
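As a rough illustration (the server name and the ReportID parameter below are invented for the example, not taken from the actual project), a recorded Web Vuser script customized with a transaction, a rendezvous point, and a parameterized value looks something like this in LoadRunner's C-based scripting language:

    Action()
    {
        /* Hold all virtual users here, then release them together to
           create a synchronized spike against the report server. */
        lr_rendezvous("run_report");

        /* Measure the response time of the "run_report" business step. */
        lr_start_transaction("run_report");

        /* {ReportID} is a parameter - each virtual user / iteration pulls
           a different value from the parameter data file. */
        web_url("run_report",
            "URL=http://bw-server/reports?id={ReportID}",
            "Resource=0",
            "Mode=HTML",
            LAST);

        lr_end_transaction("run_report", LR_AUTO);

        /* Simulated user "think time" between business steps. */
        lr_think_time(5);

        return 0;
    }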
Controller: The LoadRunner Controller allows the Test Automation Engineer to execute several Virtual User Scripts, each script being executed by 1 to n virtual users. The Controller can use 1 to n stations (PCs) to execute these scenarios. The Controller provides a full suite of basic monitoring screens that can be used to measure the performance of the application / architecture under test from several perspectives - everything from throughput, to response time, to transaction failure rate. It is important to note that these results can be saved to a report that can be viewed at any time. The LoadRunner Controller enables the Test Automation Engineer to control several aspects of the performance test: virtual user scripts, number of virtual users per script, ramp-up time, ramp-down time, ramp up/down of virtual users, and execution time - basically full customization of what, where, when, and how much testing will occur for that run. This is not an ad-hoc assembly of wizards - it is a well thought-out solution that is designed to support performance and load testing.
Maintenance: LoadRunner is a test automation development studio, which allows for both development and maintenance of code, data, and test scripts. Management and control of the test artifacts is left to the testing organization - therefore maintenance can become burdensome if the testing organization does not implement adequate configuration management practices. The architectural framework, on which LoadRunner is built, certainly makes it much easier to accomplish these tasks than with previous generations of testing tools.
Summary: LoadRunner met the immediate needs of the test organization. In a very short period of time (2-days) the Test Automation Engineers were able to build and execute a simple series of performance tests that enabled the Database Designer and SAP consultants to fine-tune the SAP Business Warehouse implementation. LoadRunner is an extremely robust and powerful performance-testing tool able to deal with almost any architectural framework you may want to test - we are still exploring its capabilities.

End-User "Automation Engineer"

Findings
The Test Automation Engineers found that LoadRunner supplied a complete framework to meet our immediate performance testing needs (SAP Business Warehouse). The organization now plans to use LoadRunner whenever an architectural or business change could impact the performance of an application / architecture.
Lessons Learned
From their automation work with LoadRunner the Test Automation engineers learned several valuable lessons:
LoadRunner is very dependent on the accuracy of the virtual user script - most of the issues encountered during script execution were caused by changes in the environment or user setup. The Virtual User script should be tested as a standalone script first using the Virtual User development environment then as a standalone script in the Controller environment using a limited number of virtual users (1 or 2). A fully tested virtual user script can usually be trusted to function as expected - if it does not function as expected you are probably looking at a change in the environment since the script was last executed. The most common issues to look for in a Web enabled environment are: User/Password changes, Server Changes, and changes in the application data.
LoadRunner comes with the capacity to create standalone code and store it in sharable libraries. This allowed the automation engineers to create a common toolkit that could be accessed and maintained by the team.
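For example, a small helper such as the one below (the function name and check are hypothetical) can live in a shared header that every Vuser script includes, so that common verifications are written once and maintained in one place:

    /* common_checks.h - shared toolkit included by every Vuser script. */

    /* Verify the last HTTP response; abort the current iteration
       (not the whole scenario) if the server returned an error. */
    void check_http_ok(const char *step_name)
    {
        int status = web_get_int_property(HTTP_INFO_RETURN_CODE);

        if (status >= 400) {
            lr_error_message("%s failed with HTTP status %d", step_name, status);
            lr_exit(LR_EXIT_ITERATION_AND_CONTINUE, LR_FAIL);
        }
    }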
LoadRunner integrates with Mercury's new Quality Center and TestDirector - unfortunately it does not support performance testing from Quality Center. The integration does allow the Test Automation Engineer to use Quality Center as a common repository of scripts that can be accessed by all members of the team.

Evaluation Summary

LoadRunner is a Tier 1 performance and load testing tool that met the immediate need of our Testing Organization. The combination of an effective test automation tool and a comprehensive performance-testing framework provided immediate returns on the automation investment. LoadRunner is one of the most commonly deployed performance testing tools in the world - this gives you immediate access to a large, well-established on-line user community via the Mercury Interactive (now HP) web site. If you are doing a price comparison between LoadRunner and several other performance testing tools you will find that it is somewhat pricey - whether you are willing to (or should) put forward this additional investment depends upon the complexity and depth of your performance testing needs. LoadRunner is the "800 pound gorilla in the room", but you may find that you only need to execute your performance testing against a "standard" web-enabled application - in which case there are several less expensive options available to you.

Mercury QuickTest Professional (QTP) Evaluation

The organization had initially invested in a suite of testing tools including Mercury's TestDirector and QuickTest Professional (QTP). TestDirector had been embraced by the organization, with a well-defined set of requirements and manual test cases - unfortunately QuickTest Professional had effectively become "shelf ware". Now the organization needs to generate hundreds of transactions through the interface to validate the load capacity of the interface and the integrity of each transaction type. Solution - "dust off" QuickTest Professional and quickly implement an effective automation solution to meet this immediate need.

Evaluation

A formal evaluation of QuickTest Professional (QTP) was only performed once the initial automation effort had been completed. The tool was first used to meet a specific need but the flexibility and power of the tool encouraged the testing team to engage in a proof-of-concept exercise with the tool exploring the possibility of automating several of the more time consuming test cases.

Management / Organization Perspective

General: QuickTest Professional (QTP) enabled the testing team to organize, author, and maintain a set of automated test cases and supporting software libraries in a shareable environment. The framework to support both Keyword and Data Driven testing is supplied "out-of-the-box", allowing the end-users to easily apply test case design, automation, and execution "best practices". QuickTest Professional comes with an easy-to-use interface that allows users to develop automation solutions using a bill-of-materials (tree) view of the Actions (Keywords) or a development (expert) view.
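QuickTest Professional implements this in VBScript through its Actions and data tables. Purely as a language-neutral sketch of the underlying idea (not QTP's actual API), a keyword-driven driver is little more than a table mapping keyword names to functions, fed by rows of test data - the keywords and data below are invented for the example:

    #include <stdio.h>
    #include <string.h>

    /* One row of a data-driven test: a keyword plus its arguments. */
    struct step { const char *keyword, *target, *value; };

    static int enter_text(const char *target, const char *value)
    { printf("enter '%s' into %s\n", value, target); return 0; }

    static int verify_text(const char *target, const char *value)
    { printf("verify %s shows '%s'\n", target, value); return 0; }

    /* The keyword table: test designers only ever reference these names. */
    static const struct {
        const char *name;
        int (*run)(const char *, const char *);
    } keywords[] = { {"EnterText", enter_text}, {"VerifyText", verify_text} };

    int main(void)
    {
        /* In practice these rows come from a spreadsheet or data table. */
        struct step steps[] = {
            {"EnterText",  "CustomerName", "Acme Corp"},
            {"VerifyText", "CustomerName", "Acme Corp"},
        };

        for (size_t i = 0; i < sizeof steps / sizeof steps[0]; i++)
            for (size_t k = 0; k < sizeof keywords / sizeof keywords[0]; k++)
                if (strcmp(steps[i].keyword, keywords[k].name) == 0)
                    keywords[k].run(steps[i].target, steps[i].value);
        return 0;
    }

The point of the pattern is that test designers compose tests from the keyword vocabulary and data rows, while only the automation engineers maintain the functions behind the keywords.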
Development: QuickTest Professional provides a software development environment designed to meet the needs of experienced and junior Test Automation Engineers. The programming language is derived from VBScript - this makes it accessible to anyone with even minimal development experience. Experienced Automation Engineers or developers will have little difficulty becoming proficient QuickTest Professional developers - the online help and built-in tutorial certainly provide a firm foundation to grow from. The actual organization of the code is left up to the Engineers - the test organization must take responsibility for organizing and maintaining this code base.
Mapping: QuickTest Professional provides an effective GUI mapping utility that enables a skilled test automation engineer to map and maintain information on the application interface. This information can be captured by script / action or as a shared resource - having a shared GUI resource did not prove to be as useful as one would expect (see lessons learned).
Wizards: QuickTest Professional is a mature Tier 1 test automation development tool. It comes with a large catalogue of integrated automation wizards - most of these wizards are aimed at the Keyword or Data Driven test automation paradigm. This is not an ad-hoc assembly of wizards - it is a well thought-out assembly of tools that are designed to support the Keyword and Data Driven testing paradigms.
Maintenance: QuickTest Professional is a test automation development studio, which allows for both development and maintenance of code, data, and test scripts. Management and control of the test artifacts is left to the testing organization - therefore maintenance can become burdensome if the testing organization does not implement adequate configuration management practices. The architectural framework, on which QuickTest Professional is built, certainly makes it much easier to accomplish these tasks than with previous generations of testing tools that were fundamentally based on the Record and Playback paradigm.
Summary: QuickTest Professional met and later exceeded the expectations of the test organization. For anyone familiar with the Keyword (Action Word) approach to test design and test automation this tool delivers almost immediate return on investment. The initial ramp-up was to create data driven scripts that would enable the automation of hundreds of transactions. This task was accomplished in four (4) business days by two (2) automation engineers. The success of this activity encouraged the testing team to perform a two (2) week proof-of-concept activity involving four (4) team members. During this exercise the team used a data driven Keyword approach that resulted in a Return-on-investment (ROI) of 10 to 1 - the ROI was based on person hours only (10 hours saved for every hour invested).

End-User "Automation Engineer"

Findings
The Test Automation Engineers found that QuickTest Professional supplied a complete framework to meet the organization's automation and test design needs. An enterprise wide test automation solution was implemented almost immediately - the framework supports an enterprise wide solution using the Keyword design paradigm. This was combined with a disciplined approach to Test Case Design and software development standards that led to a very robust solution.
Lessons Learned
From their automation work with QuickTest Professional the Test Automation engineers learned several valuable lessons.
QuickTest Professional is very dependent on the accuracy and robustness of the GUI map. The most effective way to construct and test a GUI map was to map against the most recent version and then test the map against earlier versions of the application. This helped the engineers identify object characteristics that would remain stable between releases - the engineers then used these object characteristics to map all objects of the same type. With QuickTest Professional the default mapping characteristics almost always worked between releases, but this could have been a result of how the application under test was developed - we recommend a cautious approach that verifies the robustness of the default map settings when first working with any application.
QuickTest Professional allows a group of engineers to work together by creating a common GUI map. The capability of creating a common GUI map was not as useful as originally thought - using a disciplined Keyword design approach resulted in most (98%) of objects only being used twice. Each object was usually used once for an "entry" Keyword and once for a "validation" Keyword - therefore the return on maintaining a common GUI map was minimal. Creating a common library or libraries of Keywords was much more useful.
QuickTest Professional allows a group of engineers to work together by creating a common library of Keywords (shared Keywords). The tool makes it extremely easy to set up these shared libraries - the important lesson here was that the organization of the libraries was critical to the successful reuse of the Keywords. Organizing the Keyword libraries based on the functional decomposition of the application under test worked extremely well. For example, the Customer entity and the Billing entity were separated into two separate libraries - if a test designer required a Customer Keyword they would check the Customer Keyword library, and if it was not there the designer would request a Keyword from the automation engineers.
QuickTest Professional comes with the capacity to create standalone code and store it in sharable libraries. This allowed the automation engineers to create a common toolkit that could be accessed and maintained by the team.
QuickTest Professional integrates with Mercury's new Quality Center - there was no opportunity to upgrade from TestDirector to this new product line. QuickTest Professional integrated smoothly with TestDirector, but the team could not explore the new Business Process Testing model - on paper it certainly appears to be a good fit for anyone pursuing the Keyword testing paradigm.

Evaluation Summary

QuickTest Professional is a Tier 1 Test Automation solution that met the need of our Testing Organization. The combination of an effective test automation tool and a comprehensive testing framework provided immediate returns on the automation investment. The testing organization was able to maintain a return-on-investment of eight hours saved for every hour invested for each test cycle - this was less than the original 10 to 1 ROI, but the maintenance overhead made its presence felt once large portions of the application under test were automated. Most of this overhead was on the test design side - maintaining the Test Cases, not the Keywords.

Mercury WinRunner Evaluation

"The organization has created a suite of manual test cases using a text editor but is finding it difficult to maintain, use, and execute these test cases efficiently as the test organization's role grows. The test cases have proven effective in detecting defects before they reach production but the time required to manage and execute these test cases is now impacting the return on investment. Solution - invest in a test automation tool or suite of tools."

Evaluation

The first thing an organization must accomplish is to catalogue what needs or requirements the Testing Software is expected to satisfy. There are three categories or "points-of-view" that must be addressed: Management / Organization, Test Architecture, and End-User.

Management / Organization Perspective

Needs Analysis
Management clearly stated the objective for purchasing the Test Automation solution was:
"The selected Test Automaton tool shall enable end-users to author, maintain, and execute automated test cases in a web-enabled, shareable environment. Furthermore the test automation tool shall support test case design, automation, and execution .best practices. as defined by the Test Organization. Minimum acceptable ROI is 5 hours saved for every hour currently invested."
Findings
General: Mercury WinRunner enabled the end-users to organize, author, and maintain a set of automated test cases and supporting software libraries in a shareable environment. Mercury WinRunner did support test case design, automation, and execution "best practices" as defined by the Test Organization but the framework to perform these tasks was certainly not "out-of-the-box". Mercury WinRunner provides the toolbox but it is the responsibility of the users to build and support the testing framework or purchase a testing framework that is compatible with Mercury WinRunner.
Development: Mercury WinRunner provides a software development environment designed to meet the needs of Test Automation Engineers. The programming language, Test Script Language (TSL), has the look and feel of a simplified version of C. Experienced Automation Engineers or developers should have little difficulty in becoming proficient TSL developers. The actual organization of the TSL code is left up to the Engineers - the test organization must take responsibility for organizing and maintaining this code base.
Mapping: Mercury WinRunner provides an effective GUI mapping utility that enables a skilled test automation engineer to map and maintain information on the application interface.
Wizards: Mercury WinRunner is a fully mature Tier 1 test automation development tool. It comes with a large catalogue of automation wizards - most of these wizards are aimed at the play and record test automation paradigm.
Maintenance: Mercury WinRunner is a test automation development studio, which allows for both development and maintenance of TSL code. Management and control of the software is left to the testing organization - therefore maintenance can become burdensome if the testing organization does not implement adequate configuration management practices.
Summary: Mercury WinRunner met the needs of the testing organization but additional investment in a Testing Framework and a clear development paradigm must be implemented with the automation tool. If the testing organization does not implement a clear development paradigm and testing framework any value derived from test automation will be lost to the long-term maintenance burden.

Test Architecture

Needs Analysis
Management has defined the immediate organizational goal but the long-term architectural necessities must be defined by the testing organization. The Testing Organization clearly stated the Architectural requirements for the Test Automation tool were:
"The selected Test Automation tool shall have a history of operational success in the appropriate environments with a well established end-user community to draw upon. The tool must support enterprise wide collaboration over several simultaneous engagements / projects and smooth software upgrade path. The minimum acceptable ROI of 5 hours saved for every hour currently invested must be maintained across the enterprise and during any integration or upgrade activities."
Findings
Summary: Mercury WinRunner enabled the end-users to organize, author, and maintain a set of automated test scripts across the enterprise. WinRunner is used and supported by a large, geographically dispersed user community - Mercury supplies an open forum where this community can share solutions. Integration is available for configuration management tools, test management tools, requirement management tools, and commercial testing frameworks - including Mercury's own Quality Center. Mercury WinRunner is a well-established toolset that several vendors from inside and outside the testing tool space have chosen to integrate with.

End-User "Automation Engineer"

Needs Analysis
The End-User needs analysis detailed product capabilities as they apply to the testing process - this list of requirements extended for several pages and included several test automation challenges. In brief, the End-Users stated that the Test Automation solution shall:
  1. Support the creation, implementation, and execution of Automated Test Cases.
  2. Support enterprise wide, controlled access to Test Automation (Web enabled preferred).
  3. Support Data Driven Automated Test Cases.
  4. Support Keyword enabled Test Automation.
  5. Enable Test Automation and verification of Web, GUI, .NET, and Java applications.
  6. Support the integration of other toolsets via a published API or equivalent capacity.
Findings
Summary: The Test Automation Engineers found that WinRunner supplied the basic framework required to meet all the organization's automation needs. An enterprise wide test automation solution was implemented once WinRunner was combined with a disciplined approach to Test Case Design and software control.
Lessons Learned
From their automation work with WinRunner the Test Automation engineers learned three (3) valuable lessons:
1. WinRunner is very dependent on the accuracy and robustness of the GUI map. The most effective way to construct and test a GUI map was to map against the most recent version and then test the map against earlier versions of the application. This helped the engineers identify object characteristics that would remain stable between releases - the engineers then used these object characteristics to map all objects of the same type. These mapping characteristics could be captured in the general startup script for all engineers.
2. WinRunner comes with the capacity to execute a standard startup script when it is invoked. This script could be used to standardize several operational aspects of WinRunner: GUI map configuration, standard functions, and shared libraries or scripts. This startup script could also be manipulated to allow each engineer to load shared libraries, GUI maps, and functions and to load their own versions of the same elements. In other words the shared libraries were loaded first and then (when desired) the engineer could load their versions of these libraries or functions allowing them to develop against a common code base without impacting other engineers.
3. WinRunner allows a group of engineers to work together by creating a common GUI map and script libraries. This enables the creation and maintenance of a toolkit that can be used and enhanced by the entire team. Once the automation solution has matured this shared resource can be managed using most configuration management tools - protecting the automation investment.

Evaluation Summary

Mercury WinRunner is a Tier 1 Test Automation solution that met the need of our Testing Organization. Once the organization had moved into the Test Automation space, the architectural limitations of WinRunner were recognized and compensated for by applying a supporting test framework. It should be noted that there are several competing testing frameworks that integrate smoothly with WinRunner. It should also be noted that there are several Test Automation tools - including Mercury's own QuickTest Professional (QTP) - that provide a more comprehensive testing framework. The testing organization was able to maintain a return-on-investment of eight hours saved for every hour invested for each test cycle.

How to Write Effective Bug Reports

How often do we see developers asking for more information on the bug reports we file? How often do we need to spend more time investigating an issue after the bug report has been filed? How often do we hear from the developers that the bug is not reproducible on their end and that we need to improve the Steps To Reproduce? In a broad sense, we end up spending more time on these issues than on actually testing the system. The problem lies in the quality of the bug reports. Here are some areas which can be improved upon to achieve that perfect bug report.

The Purpose Of A Bug Report

When we uncover a defect, we need to inform the developers about it. A bug report is the medium for this communication. The primary aim of a bug report is to let the developers see the failure with their own eyes. If you can't be with them to make it fail in front of them, give them detailed instructions so that they can make it fail for themselves. The bug report is a document that explains the gap between the expected result and the actual result and details how to reproduce the scenario.

After Finding The Defect

  • Draft the bug report as soon as you are sure that you have found a bug, not at the end of the test or the end of the day. Otherwise you might miss some detail. Worse, you might miss the bug itself.
  • Invest some time in diagnosing the defect you are reporting. Think of the possible causes - you might end up uncovering some more defects. Mention your discoveries in your bug report. The programmers will be happy to see that you have made their job easier.
  • Take some time off before re-reading your bug report. You might feel like re-writing it.

Defect Summary

The summary of the bug report is the reader's first interaction with your bug report. The fate of your bug depends heavily on how well the summary grabs attention. The rule is that every bug should have a one-line summary. It might sound like writing a good attention-grabbing advertisement campaign, but there are no exceptions. A good summary will not be more than 50-60 characters. Also, a good summary should not carry any subjective representations of the defect.

The Language

  • Do not exaggerate the defect through the bug report. Similarly, do not undertone it.
  • However nasty the bug might be, do not forget that it's the bug that's nasty, not the programmer. Never belittle the efforts of the programmer. Use euphemisms: 'Dirty UI' can be made milder as 'Improper UI'. This ensures that the programmer's efforts are respected.
  • Keep It Simple & Straight. You are not writing an essay or an article, so use simple language.
  • Keep your target audience in mind while writing the bug report. They might be the developers, fellow testers, managers, or in some cases, even the customers. The bug reports should be understandable by all of them.

Steps To Reproduce

  • The flow of the Steps To Reproduce should be logical.
  • Clearly list down the pre-requisites.
  • Write generic steps. For example, if a step requires the user to create a file and name it, do not ask the user to name it "Mihir's file" - something generic like "Test File" is better.
  • The Steps To Reproduce should be detailed. For example, if you want the user to save a document from Microsoft Word, you can ask the user to go to the File menu and click the Save menu entry, or you can just say "save the document". But remember, not everyone will know how to save a document from Microsoft Word, so it is better to stick to the first method.
  • Test your Steps To Reproduce on a fresh system. You might find some steps that are missing, or are extraneous.

Test Data

Strive to write generic bug reports. The developers might not have access to your test data. If the bug is specific to a certain test data, attach it with your bug report.

Screenshots

Screenshots are an essential part of the bug report. A picture is worth a thousand words. But do not make it a habit to attach screenshots with every bug report unnecessarily. Ideally, your bug reports should be effective enough to enable the developers to reproduce the problem; screenshots should be a medium for verification.
  • If you attach screenshots to your bug reports, ensure that they are not too large. Use a format like JPG or GIF, but definitely not BMP.
  • Use annotations on screenshots to pinpoint the problems. This will help the developers locate the problem at a single glance.

Severity / Priority

  • The impact of the defect should be thoroughly analyzed before setting the severity of the bug report. If you think that your bug should be fixed with a high priority, justify it in the bug report. This justification should go in the Description section of the bug report.
  • If the bug is the result of regression from the previous builds/versions, raise the alarm. The severity of such a bug may be low but the priority should be typically high.

Logs

Make it a point to attach logs or excerpts from the logs. This will help the developers analyze and debug the system more easily. Most of the time, if logs are not attached and the issue is not reproducible on the developer's end, they will come back to you asking for logs.
If the logs are not too large, say about 20-25 lines, you can paste them into the bug report. If they are larger, add them to your bug report as an attachment - otherwise your bug report will look like a log.

Other Considerations

  • If your bug is only randomly reproducible, say so in your bug report - but don't forget to file it. You can always add the exact steps to reproduce later, when you (or anyone else) discover them. This will also come to your rescue when someone else reports the issue, especially if it's a serious one.
  • Mention the error messages in the bug report, especially if they are numbered. For example, error messages from the database.
  • Mention the version numbers and build numbers in the bug reports.
  • Mention the platforms on which the issue is reproducible, and precisely mention the platforms on which it is not reproducible. Also understand that there is a difference between the issue not being reproducible on a particular platform and it not having been tested on that platform - conflating the two leads to confusion.
  • If you come across several problems that share the same cause, write a single bug report - there will be only one fix. Conversely, if you come across similar problems at different locations, each requiring its own fix, write a separate bug report for each problem: one bug report per fix.
  • If the test environment on which the bug is reproducible is accessible to the developers, mention the details for accessing this setup. This will save them the time of setting up an environment to reproduce your bug.
  • Under no circumstances should you hold back any information regarding the bug. Unnecessary iterations of the bug report between the developer and the tester before it is fixed are simply time wasted due to ineffective bug reporting.

Defect Tracking

To track defects, a defect workflow process has been implemented. Defect workflow training will be conducted for all test engineers. The steps in the defect workflow process are as follows:
a) When a defect is generated initially, the status is set to "New". (Note: How to document the defect, what fields need to be filled in and so on, also need to be specified.)
b) The Tester selects the type of defects:
  • Bug
  • Cosmetic
  • Enhancement
  • Omission
c) The tester then selects the priority of the defect:
  • Critical - fatal error
  • High - require immediate attention
  • Medium - needs to be resolved as soon as possible but not a showstopper
  • Low - cosmetic error
d) A designated person (in some companies, the software manager; in other companies, a special board) evaluates the defect, assigns a status, and makes modifications to the type of defect and/or priority if applicable.
  • The status "Open" is assigned if it is a valid defect.
  • The status "Close" is assigned if it is a duplicate defect or user error. The reason for "closing" the defect needs to be documented.
  • The status "Deferred" is assigned if the defect will be addressed in a later release.
  • The status "Enhancement" is assigned if the defect is an enhancement requirement.
e) If the status is determined to be "Open", the software manager (or other designated person) assigns the defect to the responsible person (developer) and sets the status to "Assigned".
f) Once the developer is working on the defect, the status can be set to "Work in Progress".
g) After the defect has been fixed, the developer documents the fix in the defect tracking tool and sets the status to "Fixed", if it was fixed, or "Duplicate", if the defect is a duplication (specifying the duplicated defect). The status can also be set to "As Designed", if the function executes correctly. At the same time, the developer reassigns the defect to the originator.
h) Once a new build is received with the implemented fix, the test engineer retests the fix and other possibly affected code. If the defect has been corrected with the fix, the test engineer sets the status to "Close". If the defect has not been corrected with the fix, the test engineer sets the status to "Reopen".
Defect correction is the responsibility of the system developers; defect detection is the responsibility of the AMSI test team. The test leads will manage the testing process, but the defects will fall under the purview of the configuration management group. When a software defect is identified during testing of the application, the tester will notify the system developers by entering the defect into the PVCS Tracker tool and filling out the applicable information.
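The status flow in steps (a) through (h) is essentially a small state machine. As a minimal sketch (statuses taken from the steps above; the "Duplicate" and "As Designed" outcomes are folded into Close for brevity), it could be encoded and enforced like this:

    #include <stdio.h>

    /* Defect statuses from the workflow steps above. */
    enum status { NEW, OPEN, CLOSE, DEFERRED, ENHANCEMENT,
                  ASSIGNED, WORK_IN_PROGRESS, FIXED, REOPEN };

    /* Return 1 if the workflow allows moving from one status to another. */
    int allowed(enum status from, enum status to)
    {
        switch (from) {
        case NEW:              return to == OPEN || to == CLOSE ||
                                      to == DEFERRED || to == ENHANCEMENT;
        case OPEN:             return to == ASSIGNED;
        case ASSIGNED:         return to == WORK_IN_PROGRESS || to == FIXED;
        case WORK_IN_PROGRESS: return to == FIXED;
        case FIXED:            return to == CLOSE || to == REOPEN;
        case REOPEN:           return to == ASSIGNED;
        default:               return 0;
        }
    }

    int main(void)
    {
        /* A defect that has not been fixed cannot simply be closed by the tester. */
        printf("ASSIGNED -> CLOSE allowed? %d\n", allowed(ASSIGNED, CLOSE));
        printf("FIXED    -> CLOSE allowed? %d\n", allowed(FIXED, CLOSE));
        return 0;
    }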
AMSI test engineers will add any attachments, such as a screen print, relevant to the defect. The system developers will correct the problem in their facility and implement the fix in the operational environment after the software has been baselined. The release will be accompanied by notes that detail the defects corrected in the release as well as any other areas that were changed as part of the release. Once it is implemented, the test team will perform a regression test for each modified area.
The naming convention for attachments will be the defect ID (yyy) plus Attx (where x = 1, 2, 3, ..., n); for example, the first attachment for defect 123 should be called 123Att1. If additional changes have been made other than those required for previously specified software problem reports, they will be reviewed by the test manager, who will evaluate the need for additional testing. If deemed necessary, the manager will plan additional testing activities. He will have the responsibility for tracking defect reports and ensuring that all reports are handled on a timely basis.
Configuration Management
The CM department is responsible for all CM activities and will verify that all parties involved are following the defined CM procedures. System developers will provide object code only for all application updates. It is expected that system developers will baseline their code in a CM tool before each test release. The AMSI test team will control the defect reporting process and monitor the delivery of associated program fixes. This approach will allow the test team to verify that all defect conditions have been properly addressed.

Mercury TestDirector - Evaluation

"The organization has meticulously tracked Test Requirements and Test Cases using spreadsheets but is finding this to be a cumbersome process as the test organization grows. It has been shown that this process has reduced the number of defects reaching the field but the cost of maintaining the approach is now impacting its effectiveness. Solution - invest in a test management tool or suite of tools."

Evaluation

The first thing an organization must accomplish is to catalogue what needs or requirements the software is expected to satisfy. There are three categories or "points-of-view" that were addressed during the evaluation process: Management / Organization, Test Architecture, and End-User.
Needs Analysis: Management / Organization Perspective
Management clearly stated the objective for purchasing the Test Management software was: "The selected Test Management system shall enable end-users to author and maintain requirements and test cases in a web-enabled, shareable environment. Furthermore the test management tool shall support test management "best practices" as defined by the Test Organization. Minimum acceptable ROI is 4 hours saved for every hour currently invested."
Findings: Management / Organization Perspective
General: Mercury TestDirector enabled the end-users to organize, author, and maintain a hierarchy of Requirements, Test Cases, and Defects in a web-enabled, shareable environment. The solution supported both the functional decomposition of the application under test (AUT) and the System / Business decomposition of the AUT. In both cases the solution supplied an effective non-technical interface for the business analysts and testers to author, update, report on, and maintain test artifacts.
Requirements: Mercury TestDirector supplied an effective non-technical interface for the business analyst to author, update, report on, and maintain requirements. There does seem to be an intrinsic limitation to the "bill-of-materials" approach to requirement capture and organization - it is two-dimensional. This limitation would not impact most organizations, but if a complex relationship exists between the requirements or other artifacts then Mercury TestDirector would not easily meet the need (i.e. multi-dimensional relationships).
Test Cases: Mercury TestDirector once again supplied an effective non-technical interface for the business to author, update, report on, and maintain Test Cases. The standard infrastructure supplied by Mercury TestDirector does not enforce test case design and test case organization best practices, but it does support them. Test Case design is basically a free-form text exercise with a very "thin" organizational overlay in the form of Steps - any standards or design discipline would have to come from the Test Designer. The newest Mercury TestDirector enhancement, Mercury Business Process Testing, does seem to go a long way toward addressing this, but it was not part of the initial evaluation.
Test Execution: Mercury TestDirector once again supplied a non-technical interface for the business to author, update, report on, and maintain Test Sets. The standard infrastructure supplied by Mercury TestDirector does not enforce test set management and test set organization best practices, but it does support them. The Mercury TestDirector Test Lab does allow for integration with several test automation tools and supplies an open API to give the purchaser the ability to integrate with almost any test automation tool. Test Lab, in our opinion, was the weakest link in the tool - management and maintenance of the Test Sets is basically a free-form folder approach.
Defect Tracking: Mercury TestDirector once again supplied a non-technical interface for the business to author, update, report on, and maintain Defects. The standard infrastructure supplied by Mercury TestDirector does not enforce Defect management and Defect organization best practices, but it does support them. Mercury TestDirector does supply an adequate Defect Tracking tool for most organizations - once again the approach is somewhat two-dimensional and might be found wanting if the test organization wanted to maintain more complex relationships between Defects or other test artifacts.
Summary: From a Management / Organization perspective Mercury TestDirector lives up to its reputation as a Tier 1 Test Management tool. There are architectural choices that were made that enhance usability at the expense of functionality, but Mercury TestDirector will meet the needs of most Testing Organizations.
Needs Analysis: Test Architecture
An Architectural framework has not been defined by the Test Organization therefore a general set of Architectural guidelines was applied during the evaluation. The Test Management application shall:
  1. Have a record of integrating successfully with all Tier 1 testing software vendors.
  2. Have a history of operational success in the appropriate environments.
  3. Have an established end-user community that is accessible to any end-user.
  4. Support enterprise wide collaboration.
  5. Support customization.
  6. Support several (1 to n) simultaneous engagements / projects.
  7. Provide a well-designed, friendly, and intuitive user interface.
  8. Provide a smooth migration / upgrade path from one iteration of the product to the next.
  9. Provide a rich online-help facility and effective training mechanisms (tutorials, courseware, etc.).
Findings: Test Architecture
Have a record of integrating successfully with all Tier 1 testing software vendors: Mercury TestDirector provides an open, well-defined API that currently supports integration with several tool-sets. Other vendors supply (usually at no cost) integration with Mercury TestDirector. We must give Mercury the highest marks here - if you cannot integrate another provider's tool with Mercury TestDirector it will not be the fault of Mercury TestDirector.
Have a history of operational success in the appropriate environments: Mercury TestDirector has a long history of being a fully functional web-enabled application that will operate in any Windows environment. It should be noted that Mercury TestDirector functionality within a Unix-based environment was not part of this evaluation, and there are known issues with Mercury TestDirector within the context of a Unix environment.
Have an established end-user community that is accessible to any end-user: Mercury TestDirector has a large, established, and supportive user community that is fully supported by Mercury Interactive. During our evaluation we found that all our preliminary questions could be answered through the knowledge base supplied by this community and, even more surprising, the answers were not always in favor of the tool - highest marks must be given to Mercury Interactive for supporting a free and open forum on their solution.
Support enterprise wide collaboration: Mercury TestDirector is a fully functional web-enabled application with a concurrent user license model, which fully supports enterprise collaboration.
Support customization: Mercury TestDirector supports customization of the application display elements and the database model through an intuitive interface. Mercury TestDirector supports customization / integration with other tools and applications through its fully published API. During the evaluation it was almost "too easy" to customize the interface and available data elements - perhaps a tutorial on cost-benefit analysis should be included in the tool set: "What is the on-going maintenance cost of each additional field or element?"
Support several (1 to n) simultaneous engagements / projects: Mercury TestDirector can support several simultaneous engagements / projects by allowing the user to create a separate database instance for each project. The issue with this approach is that it makes the re-integration of several projects back into a common baseline a manually intensive process.
Provide a well-designed, friendly, and intuitive user interface: The Mercury TestDirector hierarchical tree or "bill-of-materials" interface is extremely friendly. During the evaluation, novice users became familiar with, and comfortable using, the solution in one or two days. The only complaint was that there is no "undo" key or undo option.
Provide a smooth migration / upgrade path from one iteration of the product to the next: The evaluation did not provide an opportunity to validate the migration / upgrade path from one iteration of Mercury TestDirector to the next. The knowledge base found at the Mercury Interactive web site does indicate that there were issues in the past (over 2 years ago), but recent upgrades seem to have proceeded with little difficulty.
Provide a rich online-help facility and effective training mechanisms: Mercury TestDirector does provide a rich online-help facility and an adequate tutorial. During the evaluation we found most users did not use or require the online-help facility for their day-to-day tasks due to the intuitive nature of the interface.
Needs Analysis: End-User
The End-User needs analysis should be a detailed catalogue of product requirements. These requirements were evaluated on a simple pass / fail criteria. The Test Management solution shall:
  1. Support the authoring of Test Requirements.
  2. Support the maintenance of Test Requirements.
  3. Support enterprise wide controlled access to Test Requirements (Web enabled preferred).
  4. Support discrete grouping or partitioning of Test Requirements.
  5. Support Traceability of requirements to Test Cases and Defects.
  6. Support "canned" and "user defined" queries against Test Requirements.
  7. Support "canned" and "user defined" reports against Test Requirements.
  8. Support coverage analysis of Test Requirements against Test Cases.
  9. Support the integration of other toolsets via a published API or equivalent capacity.
  10. Support the creation of Defects.
  11. Support the maintenance of Defects.
  12. Support the tracking of Defects.
  13. Support enterprise wide controlled access to Defects (Web enabled preferred).
  14. Support integration with all Tier 1 and 2 Test Management tools that support integration.
  15. Enable structured and ad-hoc searches for existing Defects.
  16. Enable the categorization of Defects.
  17. Enable customization of Defect content.
  18. Support "canned" and customized reports.
Findings: End-User
Mercury TestDirector certainly met or even exceeded the user community's expectations during the initial evaluation and eventual implementation. The one functional area that the users believed needed improvement was in the area of Reporting. Once the user community became accustomed to the application they found that the standard reports and the custom reporting capabilities did not meet their expectations.

Evaluation Summary

Mercury TestDirector is a Tier 1 Test Management solution that will meet the needs of most testing organizations. The strength of the application lies in its ability to allow novice users to quickly become proficient in its use and in its ability to quickly convey information. The weakness of the application lies in its somewhat simplistic approach to several aspects of the Test Management space. It should be noted that most organizations will find Mercury TestDirector a suitable solution for all their Test Management needs; only organizations with the most complex testing requirements will find issues with Mercury TestDirector - and most of these issues can be addressed by using the Mercury TestDirector open API to integrate the appropriate toolset or application extension.

The Tao of Testing

Fred Brooks says that a third of IT development time and effort should be spent in testing. In any major software development project, with many people working on the coding, testing is essential to make sure that the system performs as the requirements say it should.
However, even if you're a developer team of one, you still have an interest in ensuring that your work has proper Quality Assurance (QA) documentation for three main reasons:
  1. Your future business depends entirely on your professional reputation - good clients will always look for a reputation for delivering their requirements. Anything which enhances that reputation is A Good Thing.
  2. Once the system is handed over to the client, you will then have an audit trail of testing, documenting that the system is working. If it later fails, you have a backup to safeguard you against potential legal and reputational action from a panicking client.
  3. If you want to feel self-interested about this (and most of us do at some point), remember that the client should pay for all of this testing - it's all chargeable time, which will result in the client getting a better system at the end of it.
So what do you have to do?

Test Scripts

Testing is a systematic discipline. You need to ensure that you test every piece of functionality against its specification, and that tests repeated after a bug has been fixed are the same as the test which highlighted the bug in the first place.
The best way to ensure that there are no gaps in your test programme is to produce a test script. This will allow you to check that no area of functionality slips through the net, either at design stage, or while the tests are being performed.
Your script should outline the steps which testers will follow, and list the expected results of each test. The detail you go into will depend on the time and budget available for testing.
A sensible way of distributing the scripts is electronically - often as a word processing document. This will allow testers to record any errors which occur together with the tests which brought them out. You should archive the documents in read only format with the rest of the project documentation. To be on the safe side, the testers should print and sign the sheets, and again, you'll store these with the documentation.

Types of Testing

Usability Testing
Usability testing should happen before a single element of the user interface (including information architecture) is fixed. Performing usability tests at this stage will allow you to change the interface reasonably quickly and cheaply - backing out an interface once it is coded is always going to be difficult. The best way to perform usability testing at this stage is to build a prototype of your proposed interface, and test that. Feedback from the testers will allow you to quickly amend your prototype and go through another iteration. Research shows that you only need about five testers per iteration to find roughly 85% of the usability issues. After a few iterations, you're unlikely to have substantive issues left.
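The five-tester figure comes from the defect-discovery model usually attributed to Nielsen and Landauer: the proportion of usability problems found by n testers is roughly 1 - (1 - L)^n, where L is the share a single tester finds (about 0.31 on average). With five testers that gives 1 - (0.69)^5 ≈ 0.84, i.e. roughly the 85% quoted above.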
Unit Testing
Typically, a system contains a number of pieces such as
  • 'the bit which displays the product'
  • 'the bit which puts the product into the shopping cart'
  • 'the bit which verifies the credit card and takes the payment'
and so on. Each of these is a unit, and you need to make sure that each unit produces the appropriate output for the input you give it, including sensible error trapping. A reasonably common (but by no means the only) way of doing this might be at a command-line, as this bypasses possible errors introduced by the web server process itself. All you are doing is checking that the basic code does what it says on the tin. Note that for complicated systems, each unit might be a system in its own right with sub-units. The division between system and unit tests in such a case is a little hazy.
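As a trivial illustration (the pricing function below is hypothetical, standing in for 'the bit which puts the product into the shopping cart'), a command-line unit test is just a small program that feeds known inputs to one unit and asserts on the outputs, including the error cases:

    #include <assert.h>
    #include <stdio.h>

    /* The unit under test: price a line item, returning -1 on bad input. */
    long line_total_pence(long unit_price_pence, int quantity)
    {
        if (unit_price_pence < 0 || quantity < 0)
            return -1;                      /* sensible error trapping */
        return unit_price_pence * quantity;
    }

    int main(void)
    {
        /* Normal input produces the expected output. */
        assert(line_total_pence(250, 4) == 1000);

        /* Boundary case: an empty order costs nothing. */
        assert(line_total_pence(250, 0) == 0);

        /* Invalid input is rejected rather than silently accepted. */
        assert(line_total_pence(-1, 4) == -1);

        puts("all unit tests passed");
        return 0;
    }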
System Testing
Once you have all your units behaving as expected, you need to string them together into a system, and test it in a semi-real environment, which is only different from the way it will finally operate in that you're not working with real users and live data.
Integration Testing
As eBusinesses become more complicated, there is a growing need for the systems you produce to be integrated with other systems, like the financial reporting system, the logistics system, the customer database and so on. The purpose of integration testing is to ensure that your system's inputs from and outputs to the other systems are as expected. This means that you will need to ensure that test data fed between the systems is not going to be mistaken for live data. That said, at some point you will need to put a real transaction through your test system as an end-to-end test. A useful (and popular with developers) way of doing this is giving the team working on the site an allowance to spend on the site as 'friendly orders', in return for reporting back any customer-facing inconsistencies in the entire process.
Volume Testing
Far too often, an eBusiness is a victim of its own success. From the Slashdot effect to the sheer stupidity of the Marketing department, if your system won't handle the loads put on it by users, you are going to lose both face and money. Larger eRetailers are now building their systems to handle over a thousand simultaneous users. While you may not be in that league, you need to simulate the loads you anticipate, plus enough headroom for traffic growth. Get it wrong, and you may be facing a launch delay of months.
Regression Testing
Unless you are spectacularly lucky, your testing will highlight errors in your system. And there's a better than average chance that fixing those errors will introduce new errors. Regression testing is a matter of going back over your previous tests to ensure that:
  1. The bug you previously found has been fixed
  2. No new bugs have been introduced.
If you are producing release notes for each patch, it should be fairly easy to track down the cause of new errors introduced with a patch. The consequence of regression is that testing is inevitably an iterative discipline - you will need to continue to test, fix, and regress until you have a system which meets the requirements.
User Acceptance Testing (UAT)
Once you have what appears to you to be a working system which meets all the requirements, the final piece of work you must undertake before you can ask for your cheque is User Acceptance Testing. This is essentially stepping through all the functionality with the client staff who are actually going to use the system. If your system fails UAT, yet meets the paper requirements, then you have an issue with your requirements documentation. You will need to resolve this with the client - have there been scope changes since the requirements doc was signed off? - before you can justifiably ask the client to sign off all your work and pay you.

Report Errors

Once your testing has highlighted issues with the system, you need a process to ensure that each one is prioritised, diagnosed and fixed.
A common approach is to have a central database which logs each new error, and captures the following information:
  • An ID number
  • Status (new, in progress or resolved)
  • Priority:
    1. Red (ie causes non-functionality in the system. Must get fixed before go-live). I've also seen this subdivided into "Red" and "Mother of Red".
    2. Amber (ie causes interference to user tasks. Should get fixed before go-live).
    3. Green (ie causes annoyance to users. Will get fixed if there is time before go-live).
  • Patch ID which will resolve (or has resolved) the error
  • An owner - a named individual who will take responsibility for ensuring that the fix happens. This need not be the person who actually fixes it.
  • A detailed description of the error, including any error messages, and screenshots where appropriate.
On each update of an error report, you should record an audit trail, outlining what's been changed, who's changed it and when.
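A minimal sketch of the record behind such a database, mirroring the fields listed above (the field names and sizes are illustrative only), might look like this:

    #include <time.h>

    enum err_status   { ERR_NEW, ERR_IN_PROGRESS, ERR_RESOLVED };
    enum err_priority { RED, AMBER, GREEN };

    /* One logged error. */
    struct error_report {
        int               id;                /* ID number                        */
        enum err_status   status;            /* new, in progress or resolved     */
        enum err_priority priority;          /* red / amber / green              */
        char              patch_id[32];      /* patch which will/has resolved it */
        char              owner[64];         /* named individual responsible     */
        char              description[2048]; /* detailed description, error
                                                messages, screenshot references  */
    };

    /* One entry in the audit trail kept for every update. */
    struct audit_entry {
        int    error_id;
        char   changed_by[64];
        time_t changed_at;
        char   what_changed[256];
    };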

Evaluating Testing Software & Tools

Once a testing organization reaches a certain size, level of maturity, or workload, the requirement to purchase or build testing software or aids becomes apparent. There are several classes of testing tools available today that make the testing process easier, more effective, and more productive. Choosing the appropriate tool to meet the testing organization's long-term and short-term goals can be a challenging and frustrating process. Following a few simple guidelines and applying a common-sense approach to software acquisition and implementation will lead to a successful implementation of the appropriate tool and a real return on investment (ROI).
One of the simplest questions to ask when looking at testing software is "What is ROI?" The simplest answer is "Anything that reduces the hours required to accomplish any given task". Testing tools should be brought into an organization to improve the efficiency of a proven testing process - the value of the actual process has already been established within the organization or within the industry.
Example: Test Management
The organization has meticulously tracked Test Requirements and Test Cases using spreadsheets but is finding this to be a cumbersome process as the test organization grows. It has been shown that this process has reduced the number of defects reaching the field but the cost of maintaining the approach is now impacting its effectiveness. Solution - invest in a test management tool or suite of tools.
Example: Test Automation
The organization has created a suite of manual test cases using a text editor but is finding it difficult to maintain, use, and execute these test cases efficiently as the test organization's role grows. The test cases have proven effective in detecting defects before they reach production but the time required to manage and execute these test cases is now impacting the return on investment. Solution - invest in a test automation tool or suite of tools.
Example: Defect Management
The test organization has implemented a defect tracking process using e-mail and a relational database but is now finding that defects are being duplicated and mishandled as the volume of defects grows. Solution - upgrade the current in-house solution or invest in a defect management tool.

Needs Analysis

The first thing an organization must accomplish is to catalogue what needs or requirements the Testing Software is expected to satisfy. For an organization that is new to the acquisition process this can be a rather intimidating exercise. There are three categories or "points-of-view" that must be addressed: Management / Organization, Test Architecture, and End-User.
Needs Analysis: Management / Organization
Management or the test organization needs to clearly state the objective for purchasing testing software: the mission or goal that will be met by acquiring the test software and the expected ROI in terms of person-hours once the tool has been fully implemented. This can be accomplished by creating a simple mission statement and a minimum acceptable ROI. It should be noted that any ROI of less than 3 (hours) to 1 (current hour) should be considered insufficient because of the impact of introducing a new business process into the testing organization. This should be a concise statement of the overall goal (1 to 3 sentences), not a dissertation or catalogue of the product's capabilities.
Example: Test Management
The selected Test Management system shall enable end-users to author and maintain requirements and test cases in a web-enabled, shareable environment. Furthermore the test management tool shall support test management "best practices" as defined by the Test Organization. Minimum acceptable ROI is 4 hours saved for every hour currently invested.
Example: Test Automation
The selected Test Automation tool shall enable end-users to author, maintain, and execute automated test cases in a web-enabled, shareable environment. Furthermore the test automation tool shall support test case design, automation, and execution "best practices" as defined by the Test Organization. Minimum acceptable ROI is 5 hours saved for every hour currently invested.
Example: Defect Management
The selected Defect Management tool shall enable end-users to author, maintain, and track / search defects in a web-enabled, e-mail-enabled, shareable environment. Furthermore the defect management tool shall support authoring, reporting, and tracking "best practices" as defined by the Test Organization. Minimum acceptable ROI is 4 hours saved for every hour currently invested.
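The minimum acceptable ROI figures above can be read in more than one way; one plausible reading, assumed here purely for illustration, is hours of testing effort saved for every hour spent acquiring, implementing, and operating the new tool. Under that assumption a back-of-the-envelope check looks like this (all figures invented):

    # Illustrative ROI check under the reading described above; all figures are invented.
    hours_saved_per_release = 120   # manual effort the tool is expected to eliminate each release
    hours_spent_on_tool = 30        # licence administration, script upkeep, training time, etc.

    roi = hours_saved_per_release / hours_spent_on_tool
    print(f"ROI = {roi:.1f} : 1")   # 4.0 : 1 - clears the 3:1 floor suggested earlier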

Needs Analysis: Test Architecture

Management has defined the immediate organizational goal but the long-term architectural necessities must be defined by the testing organization. When first approaching the acquisition of testing software, test organizations have usually not invested much effort in defining an overall test architecture. Lack of an overall Test Architecture can lead to product choices that may be effective in the short-term but lead to additional long-term costs or even replacement of a previously selected toolset. If an Architectural framework has been defined then the Architectural needs should already be clearly understood and documented - if not, then a general set of Architectural guidelines can be applied. The selected Testing Software and tool vendor shall:
  1. Have a record of integrating successfully with all Tier 1 testing software vendors.
  2. Have a history of operational success in the appropriate environments.
  3. Have an established end-user community that is accessible to any end-user.
  4. Support enterprise wide collaboration.
  5. Support customization.
  6. Support several (1 to n) simultaneous engagements / projects.
  7. Provide a well-designed, friendly, and intuitive user interface.
  8. Provide a smooth migration / upgrade path from one iteration of the product to the next.
  9. Provide a rich online-help facility and effective training mechanisms (tutorials, courseware, etc.).
The general architectural requirements for any tool will include more objectives than the nine listed above but it is important to note that any objective should be applied across the entire toolset.

Needs Analysis: End-User

The End-User needs analysis should be a detailed dissertation or catalogue of the envisioned product capabilities as they apply to the testing process - probably a page or more of requirements itemized or tabulated in such a way as to expedite the selection process. This is where the specific and perhaps unique product capabilities are listed. The most effective approach is to start from a set of general requirements and then extend into a catalogue of more specific / related requirements.
Example: Test Management
The Test Management solution shall:
  1. Support the authoring of Test Requirements.
  2. Support the maintenance of Test Requirements.
  3. Support enterprise wide controlled access to Test Requirements (Web enabled preferred).
  4. Support discrete grouping or partitioning of Test Requirements.
  5. Support Traceability of requirements to Test Cases and Defects.
  6. Support "canned" and "user defined" queries against Test Requirements.
  7. Support "canned" and "user defined" reports against Test Requirements.
  8. Support coverage analysis of Test Requirements against Test Cases.
  9. Support the integration of other toolsets via a published API or equivalent capacity.
  10. And so on.
The key here is to itemize the requirements to a sufficient level of detail and then apply these requirements against each candidate.
Example: Test Automation
The Test Automation solution shall:
  1. Support the creation, implementation, and execution of Automated Test Cases.
  2. Support enterprise wide, controlled access to Test Automation (Web enabled preferred).
  3. Support Data Driven Automated Test Cases.
  4. Support Keyword enabled Test Automation.
  5. Integrate with all Tier 1 and 2 Test Management tools that support integration.
  6. Integrate with all Tier 1 and 2 Defect Management tools that support integration.
  7. Enable Test Case Design within a non-technical framework.
  8. Enable Test Automation and verification of Web, GUI, .NET, and Java applications.
  9. Support the integration of other toolsets via a published API or equivalent capacity.
  10. And so on.
Once again the key is to itemize the requirements to a sufficient level of detail. It is not necessary that all the requirements are "realistic" in terms of what is available - looking to the future can often lead to choosing the tool that eventually does provide the desired ability.
Example: Defect Management
The Defect Management solution shall:
  1. Support the creation of Defects.
  2. Support the maintenance of Defects.
  3. Support the tracking of Defects.
  4. Support enterprise wide controlled access to Defects (Web enabled preferred).
  5. Support integration with all Tier 1 and 2 Test Management tools that support integration.
  6. Enable structured and ad-hoc searches for existing Defects.
  7. Enable the categorization of Defects.
  8. Enable customization of Defect content.
  9. Support "canned" and customized reports.
  10. And so on.
In all cases your understanding of the basic needs will change as you proceed through the process of defining and selecting appropriate Testing Software. Ensure that a particular vendor is not re-defining the initial goal; at the same time, becoming an educated consumer in any given product space will often lead to a legitimate redefinition of the basic requirements, and this should be recognized and documented.

Identify Candidates

Identifying a list of potential software candidates can be accomplished by investigating several obvious sources: generic web searches, Quality Assurance and Testing on-line forums, QA and Testing e-magazines, and co-workers. Once a list of potential software candidates has been created, an assessment of currently available reviews can be done - with an eye for obvious marketing ploys. It is also important to note which products command the largest portion of the existing market and which product has the fastest growth rate - this relates to the availability of skilled end-users and end-user communities. Review the gathered materials against the needs analysis and create a short list (3 to 5) of candidates for assessment.

Assess Candidates

If you have been very careful and lucky, your first encounter with the vendor's sales force will occur at this time. This can be a frustrating experience if you are purchasing a relatively small number of licenses or an intimidating one if you are going to be placing an order for a large number of licenses. Being vague as to the eventual number of licenses can put you in the comfortable middle ground.
Assessments of any Testing Software should be accomplished onsite with a full demo version of the software. When installing any new Testing Software: install on a typical end-user system, check for ".dll" file conflicts, check for registry entry issues, check for file conflicts, and ensure the software is operational. Record any issues discovered during the initial installation and seek clarification and resolution from the vendor.
Once the Testing Software has been installed assess the software against the previous needs analysis - first performing any available online tutorials and then applying the software to your real-world situation. Record any issues discovered during the assessment process and seek clarification and resolution from the vendor. Any additional needs discovered during an assessment should be recorded and applied to all candidates.
The assessment process itself will lead to the assessment team gaining skills in the product space. It is always wise to do one final pass of all candidates once the initial assessment is completed. Each software candidate can now be graded against the needs/ requirements and a final selection made.
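One way to make that final grading repeatable is a simple weighted scoring matrix. The sketch below is illustrative only - the requirement names, weights, and candidate scores are invented - but it shows the mechanics of scoring each short-listed candidate against the itemized needs.

    # Weighted scoring of short-listed candidates against itemized requirements.
    # Requirement names, weights (importance), and 0-5 scores are invented examples.
    requirements = {
        "author/maintain test requirements": 5,
        "web-enabled shared access": 4,
        "traceability to test cases and defects": 5,
        "canned and user-defined reports": 3,
        "published API for integration": 2,
    }

    candidates = {
        "Candidate A": {"author/maintain test requirements": 4, "web-enabled shared access": 5,
                        "traceability to test cases and defects": 3, "canned and user-defined reports": 4,
                        "published API for integration": 2},
        "Candidate B": {"author/maintain test requirements": 5, "web-enabled shared access": 3,
                        "traceability to test cases and defects": 5, "canned and user-defined reports": 3,
                        "published API for integration": 4},
    }

    def weighted_score(scores: dict) -> int:
        # multiply each candidate score by the requirement's weight and sum
        return sum(weight * scores.get(req, 0) for req, weight in requirements.items())

    for name, scores in sorted(candidates.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
        print(f"{name}: {weighted_score(scores)}")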

Implementation

Implementation is obviously not part of the selection process but is a common point of failure. Test organizations will often invest in testing software but not in the wherewithal to successfully use it. Investing hundreds of thousands of dollars in software but not investing capital in onsite training and consulting expertise is a recipe for disaster. The software vendor should supply a minimum level of training for any large purchase and be able to supply or recommend onsite consultants / trainers that will ensure the test organization can take full advantage of the purchased software as quickly as possible. By bringing in the right mix of training, consulting, and vendor expertise the test organization can avoid much of the disruption any change in process brings and quickly gain the benefits that software can provide.

Test Tools & Aids: Reviews, Test Management, Test Automation, & Defect Tracking

There are several classes of testing tools available today that make the testing process easier, more effective, and more productive. When properly implemented these tools can provide a test organization with substantial gains in testing efficiency. Test tools need to fit into the overall testing architecture and should be viewed as process enablers - not as the "answer". Test organizations will often look to tools to facilitate: Reviews, Test Management, Test Design, Test Automation, and Defect Tracking. It is quite common for a testing tool or family of tools to address one or more of these needs but for convenience they will be addressed from a functional perspective not a "package" perspective.
It is important to note that, as with any tool, improper implementation or ad-hoc implementation of a testing tool can lead to a negative impact on the testing organization. Ensure a rigorous selection process is adhered to when selecting any tool including: needs analysis, on-site evaluation, and an assessment of return on investment (ROI).

Reviews

Reviews and technical inspections are the most cost-effective way to detect and eliminate defects in any project. This is also one of the most underutilized testing techniques; consequently there are very few tools available to meet this need. Any test organization that is beginning to realize the benefits of Reviews and Inspections but is encountering scheduling issues between participants should be looking to an on-line collaboration tool. There are several on-line collaboration tools available for the review and update of documents but only one (that I'm aware of) that actually addresses the science and discipline of Reviews - ReviewPro™ by Software Development Technologies. I normally do not "plug" a particular tool but when the landscape is sparse I believe some recognition is in order.

Test Management

Test Management encompasses a broad range of activities and deliverables. The Test Management aid or set of aids selected by an organization should integrate smoothly with any communication (i.e. e-mail, network, etc.) and automation tools that will be applied during the testing effort. Generic management tools will often address the management of resources, schedules, and testing milestones but not the activities and deliverables specific to the testing effort: Test Requirements, Test Cases, Test Results, and Analysis. For more on Test Management see Testing & The Role of a Test Lead / Manager.
Test Requirements
Requirements or Test Requirements often become the responsibility of the testing organization. In order for any set of requirements to be useful to the testing organization they must be: maintainable, testable, consistent, and traceable to the appropriate test cases. The Requirements Management tool must be able to fulfill these needs within the context of the testing team's operational environment. For more on requirements see Testing & The Role of a Test Designer / Tester.
Test Cases
The testing organization is responsible for authoring and maintaining Test Cases. A Test Case authoring and management tool should enable the test organization to: catalogue, author, maintain, manually execute, and auto-execute automated tests. These Test Cases need to be traceable to the appropriate requirements and the results recorded in such a manner as to support coverage analysis. The key integration questions when looking at Test Case management tools are: Does it integrate with the Test Requirements aid? Does it integrate with the Test Automation tools being used? Will it support coverage analysis? For more on test cases see Test Deliverables: Test Plan, Test Case, Defect-Fault, and Status Report.
Test Results & Analysis
A Test Management suite of tools and aids needs to be able to report on several aspects of the testing effort. There is an immediate requirement for test case results - Which test case steps passed and which failed? There will be periodic status reports that will address several aspects of the testing effort: Test Cases executed / not executed, Test Cases passed / failed, Requirements tested / not tested, Requirements verified / not verified, and Coverage Analysis. The reporting mechanism should also support the creation of custom or ad-hoc reports that are required by the Test Organization.
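Coverage analysis of the kind described above boils down to joining requirements with the test cases (and results) that trace to them. A minimal sketch, assuming each test case simply lists the requirement IDs it covers (all identifiers and results invented):

    # Which requirements are covered by at least one test case, and which of those
    # are actually verified (all covering test cases passed)?  Data is illustrative.
    requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

    test_cases = {
        "TC-01": {"covers": ["REQ-1", "REQ-2"], "result": "passed"},
        "TC-02": {"covers": ["REQ-2"],          "result": "failed"},
        "TC-03": {"covers": ["REQ-3"],          "result": "not executed"},
    }

    covered  = {req for tc in test_cases.values() for req in tc["covers"]}
    verified = {req for req in covered
                if all(tc["result"] == "passed"
                       for tc in test_cases.values() if req in tc["covers"])}

    print("Not covered :", [r for r in requirements if r not in covered])   # REQ-4
    print("Covered     :", sorted(covered))
    print("Verified    :", sorted(verified))                                # REQ-1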

Test Automation

Several Test Automation frameworks have been implemented over the years by commercial vendors and testing organizations: Record & Playback, Extended Record & Playback, and Load / Performance. For more on Test Automation see Testing & The Role of a Test Automation Engineer.
Record & Playback
Record and Playback frameworks were the first commercially successful testing solutions. The tool simply records a series of steps or actions against the application and allows a playback of the recording to verify that the behavior of the application has not changed.
Extended Record & Playback
It quickly became apparent that a simple Record and Playback paradigm was not very effective and did not make test automation available to non-technical users. Vendors extended the Record & Playback framework to make the solution more robust and transparent. These extensions included: Data Driven, Keyword, and Component Based solutions.
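To illustrate what a keyword / data-driven layer adds over raw record-and-playback, here is a small sketch: test steps are expressed as keyword rows that a non-technical designer could author, and a thin dispatcher maps each keyword onto automation code. The keywords, data, and function bodies are invented placeholders, not any particular vendor's syntax.

    # Keyword-driven sketch: each test step is (keyword, arguments); the dispatcher
    # maps keywords onto automation functions.  All names and data are illustrative.
    def open_page(url):            print(f"navigate to {url}")
    def enter_text(field, value):  print(f"type '{value}' into {field}")
    def click(button):             print(f"click {button}")
    def verify_text(expected):     print(f"verify page contains '{expected}'")

    KEYWORDS = {"OpenPage": open_page, "EnterText": enter_text,
                "Click": click, "VerifyText": verify_text}

    # A data-driven login test: the same keyword rows run once per data row.
    steps = [("OpenPage", ["https://example.test/login"]),
             ("EnterText", ["username", "{user}"]),
             ("EnterText", ["password", "{password}"]),
             ("Click", ["Login"]),
             ("VerifyText", ["{expected}"])]

    data_rows = [{"user": "alice", "password": "secret1", "expected": "Welcome alice"},
                 {"user": "bob",   "password": "wrong",   "expected": "Invalid credentials"}]

    for row in data_rows:
        for keyword, args in steps:
            KEYWORDS[keyword](*[a.format(**row) for a in args])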
Load / Performance
Load / Performance test frameworks provide a mechanism to simulate transactions against the application being tested and to measure the behavior of the application while it is under this simulated load. The Load / Performance tool enables the tester to: load, measure, and control the application.
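At its core a load / performance framework does what the toy sketch below does, only at far greater scale and with much richer monitoring: fire a configurable number of concurrent virtual users at the system, time each transaction, and summarize the results. The target URL and user counts are placeholders, and a real tool adds ramp-up, rendezvous points, scenarios, and server-side monitors.

    # Toy load-generation sketch: N concurrent "virtual users" each issue requests
    # and record response times; failures are counted rather than raised.
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET = "http://localhost:8080/health"   # placeholder endpoint
    VIRTUAL_USERS = 25
    REQUESTS_PER_USER = 10

    def virtual_user() -> list:
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            try:
                urllib.request.urlopen(TARGET, timeout=5).read()
                timings.append(time.perf_counter() - start)
            except Exception:
                timings.append(None)          # record a failed transaction
        return timings

    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = [t for user in pool.map(lambda _: virtual_user(), range(VIRTUAL_USERS)) for t in user]

    ok = [t for t in results if t is not None]
    print(f"transactions: {len(results)}, failures: {len(results) - len(ok)}")
    if ok:
        print(f"avg {statistics.mean(ok):.3f}s  p95 {statistics.quantiles(ok, n=20)[18]:.3f}s")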

Defect Tracking

The primary purpose of testing is to detect defects in the application before it is released into production; furthermore defects are arguably the only product the testing team produces that is seen by the project team. The Defect Management tools must enable the test organization to: author, track, maintain, trace to Test Cases, and trace to Test Requirements any defects found during the testing effort. The Defect Management tool also needs to support both scheduled and ad-hoc analysis and reporting on defects. For more information on defects see Testing & The Role of a Test Designer / Tester and Test Deliverables: Test Plan, Test Case, Defect-Fault, and Status Report.

Test Deliverables: Test Plan, Test Case, Defect-Fault, and Status Report

There are core sets of test deliverables that are required for any testing phase: Test Plan, Test Case, Defect-Fault, and Status Report. Taken together, this set of deliverables takes the testing team from planning, to testing, through defect remediation and status reporting. This does not represent a definitive set of test deliverables but it will help any test organization begin the process of determining an appropriate set of deliverables. One common misconception is that these must be presented as a set of documents but there are toolsets / applications available that capture the content and intent of these deliverables without creating a document or set of documents. The goal is to capture the required content in a useful and consistent framework as concisely as possible.

Test Plan

At a minimum the Test Plan presents the test: objectives, scope, approach, assumptions, dependencies, risks, and schedule for the appropriate test phase or phases. Many test organizations will use the test plan to describe the testing phases, testing techniques, testing methods, and other general aspects of any testing effort. General information around the practice of testing should be kept in a "Best Practices" repository - Testing Standards. This avoids redundant and conflicting information being presented to the reader and keeps the Test Plan focused on the task at hand - planning the testing effort (see - "Testing and The Role of a Test Lead Test Manager").
Objectives - Mission Statement
The objective of the current testing effort needs to be clearly stated and understood by the testing team and any other organization involved in the deployment. This should not be a sweeping statement on testing the "whole application" (unless that is actually the goal); instead the primary testing objectives should relate to the purpose of the current release. If this were a Point-of-Sale system and the purpose of the current release was to provide enhanced on-line reporting functionality then the objective / mission statement could be:
"To ensure the enhanced on-line reporting functionality performs to specification and to verify any existing functionality deemed to be In-Scope."
The test objective describes the "why" of the testing effort; the details of the "what" will be described in the scope portion of the test plan. Once again any general testing objectives should be documented in the "Best Practices" repository. General or common objectives for any testing effort could include: expanding the test case regression suite, documenting new requirements, automating test cases, updating existing test cases, and so on.
In Scope
The components of the system to be tested (hardware, software, middleware, etc.) need to be clearly defined as being "In Scope". This can take the form of an itemized list of the in scope: requirements, functional areas, systems, business functions, or any aspect of the system that clearly delineates the scope to the testing organization and any other organization involved in the deployment. The "What is to be tested" question should be answered by the in scope portion of the test plan - the aspects of the system that will be covered by the current testing effort.
Out of Scope
The components of the system that will not be tested also need to be clearly defined as being "Out of Scope". This does not mean that these system components will not be executed / exercised, just that test cases will not be included that specifically test these system components. The "What is NOT to be tested" question should be answered by the out of scope portion of the test plan. Often neglected, this part of the test plan begins to deal with the risk-based scheduling that all test organizations must address - What parts of the system can I afford not to test? The testing approach section of the test plan should address this question.

Approach

This section defines the testing activities that will be applied against the application for the current testing phase. It addresses how testing will be accomplished against the in scope aspects of the system and any mitigating factors that may reduce the risk of leaving aspects of the system out of scope. The approach should be viewed as a "to do" list that will be fully detailed in the test schedule. The approach should clearly state which aspects of the system are to be tested and how - Backup / Recovery Testing, Compatibility / Conversion Testing, Destructive Testing, Environment Testing, Interface Testing, Parallel Testing, Procedural Testing, Regression Testing, Security Testing, Storage Testing, Stress & Performance Testing, and any other testing approach that is applicable to the current testing effort. The reasoning for using any given set of approaches should be described, usually from the perspective of risk.
Assumptions
Assumptions are facts, statements, and/or expectations of other teams that the Test Team believes to be true - assumptions specific to each testing phase should be documented. These are the assumptions upon which the test approach was based - listed assumptions are also risks should they prove incorrect. If any of the assumptions prove not to be true, there may be a negative impact on the testing activities. In any environment there is a common set of assumptions that apply to any given release. These common assumptions should be documented in the "Best Practices" repository; only assumptions unique to the current testing effort, and perhaps those common assumptions critical to the current situation, should be documented here.
Dependencies
Dependencies are events or milestones that must be completed in order to proceed within any given testing activity. These are the dependencies that will be presented in the test schedule. In this section the events or milestones which are deemed critical to the testing effort should be listed and any potential impacts / risks to the testing schedule itemized.
Risks
Risks are factors that could negatively impact the testing effort. An itemized list of risks should be drawn up and their potential impact on the testing effort described. Risks that have been itemized in the Project Plan need not be repeated here unless the impact to the testing effort has not already been clearly stated.
Schedule
The test schedule defines when and by whom testing activities will be performed. The information gathered for the body of the Test Plan is used here in combination with the available resource pool to determine the test schedule. Experience from previous testing efforts along with a detailed understanding of the current testing goals will help make the test schedule as accurate as possible. There are several planning / scheduling tools available that make the plan easier to construct and maintain.

Test Case

Test Cases are the formal implementation of a test case design. The goal of any given test case or set of test cases is to detect defects in the system being tested. A Test Case should be documented in a manner that is useful for the current test cycle and any future test cycles - at a bare minimum each test case should contain: Author, Name, Description, Step, Expected Results, and Status (see - "Testing and The Role of a Test Designer Tester").
Test Case Name
The name or title should contain the essence of the test case including the functional area and purpose of the test. Using a common naming convention that groups test cases encourages reuse and helps prevent duplicate test cases from occurring.
Test Case Description
The description should clearly state the sequence of business events to be exercised by the test case. The Test Case description can apply to one or more test cases; it will often take more than one test case to fully test an area of the application.
Test Case Step
Each test case step should clearly state the navigation, data, and events required to accomplish the step. Using a common descriptive approach encourages conformity and reuse. Keywords offer one of the most effective approaches to Test Case design and can be applied to both manual and automated test cases (see - "Keyword based Test Automation").
Expected Results
The expected behavior of the system after any test case step that requires verification / validation - this could include: screen pop-ups, data updates, display changes, or any other discernable event or transaction on the system that is expected to occur when the test case step is executed.
Status
The operational status of the test case - Is it ready to be executed?
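Whether the test case lives in a document or a test management tool, the minimum content above maps naturally onto a small structure. A minimal sketch, with field names following the list above and everything else (names, values, statuses) purely illustrative:

    # Minimal test case structure mirroring the fields listed above; illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestStep:
        action: str            # navigation, data, and events needed to perform the step
        expected_result: str   # discernable outcome to verify after the step

    @dataclass
    class TestCase:
        author: str
        name: str              # functional area + purpose, per the naming convention
        description: str       # sequence of business events exercised
        steps: List[TestStep] = field(default_factory=list)
        status: str = "draft"  # e.g. draft / ready to execute

    login_tc = TestCase(
        author="j.tester",
        name="Login - valid credentials",
        description="Verify a registered user can log in and reach the home page.",
        steps=[TestStep("Open the login page and submit valid credentials",
                        "Home page is displayed with the user's name")],
        status="ready to execute",
    )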

Defect-Fault

The primary purpose of testing is to detect defects in the application before it is released into production; furthermore defects are arguably the only product the testing team produces that is seen by the project team. Document defects in a manner that is useful in the defect remediation process - at a bare minimum each defect should contain: Author, Name, Description, Severity, Impacted Area, and Status (see - "Testing and The Role of a Test Designer Tester").
Defect Name
The name or title should contain the essence of the defect including the functional area and nature of the defect.
Defect Description
The description should clearly state what sequence of events leads to the defect and, where possible, include a screen snapshot or printout of the error.
How to replicate
The defect description should provide sufficient detail for the triage team and the developer fixing the defect to duplicate the defect.
Defect severity
The severity assigned to a defect is dependent on: the phase of testing, the impact of the defect on the testing effort, and the risk the defect would present to the business if it were rolled out into production.
Impacted area
The Impacted area can be referenced by functional component or functional area of the system - often both are used.

Status Report

A test organization and members of the testing team will be called upon to create Status Reports on a daily, weekly, monthly, and per-project basis. The content of any status report should remain focused on the testing objective, scope, and schedule milestones currently being addressed. It is useful to state each of these at the beginning of each status report and then publish the achievements or goals accomplished during the current reporting period and those that will be accomplished during the next reporting period. Any known risks that will directly impact the testing effort need to be itemized here, especially any "showstoppers" that will prevent any further testing of one or more aspects of the system.
Reporting Period
The period covered in the current status report with references to any previous status reports that should be reviewed.
Mission Statement
The objective of the current testing effort needs to be clearly stated and understood by the testing team and any other organization involved in the deployment.
Current Scope
The components of the system being tested (hardware, software, middleware, etc.) need to be clearly defined as being "In Scope" and any related components that are not being tested need to be clearly itemized as "Out of Scope".
Schedule Milestones
Any schedule milestones being worked on during the current reporting period need to be listed and their current status clearly stated. Milestones that were scheduled but not addressed during the current reporting period need to be raised as Risks.
Risks
Risks are factors that could negatively impact the current testing effort. An itemized list of risks that are currently impacting the testing effort should be drawn up and their impact on the testing effort described.