Friday, October 22, 2010

Testing Terminology (Test Glossary)

Testing Terminology can be ambiguous for experienced and neophyte testers alike. The following set of definitions is by no means an industry standard and should be viewed as a guideline for forming an understanding of Testing Terminology. The terms used in this Glossary are organized by subject matter.

Testing Levels / Phases

Testing levels or phases should be applied against the application under test when the previous phase of testing is deemed to be complete, or "complete enough". Any defects detected during any level or phase of testing need to be recorded and acted on appropriately.
Design Review "The objective of Design Reviews is to verify all documented design criteria before development begins." The design deliverable or deliverables to be reviewed should be complete within themselves. The environment of the review should be a professional examination of the deliverable with the focus being the deliverable not the author (or authors). The review must ensure each design deliverable for: completeness, correctness, and fit (both within the business model, and system architecture).
Design reviews should be conducted by: subject matter experts, testers, developers, and system architects to ensure all aspects of the design are reviewed.
Unit Test "The objective of unit test is to test every line of code in a component or module." The unit of code to be tested can be tested independent of all other units. The environment of the test should be isolated to the immediate development environment and have little, if any, impact on other units being developed at the same time. The test data can be fictitious and does not have to bear any relationship to .real world. business events. The test data need only consist of what is required to ensure that the component and component interfaces conform to the system architecture. The unit test must ensure each component: compiles, executes, interfaces, and passes control from the unit under test to the next component in the process according to the process model.
The developer in conjunction with a peer should conduct unit test to ensure the component is stable enough to be released into the product stream.
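For illustration, here is a minimal unit test sketch in Python; the add_tax function and its expected behavior are hypothetical stand-ins for a real component:

    import unittest

    def add_tax(amount, rate=0.05):
        """Hypothetical unit under test: applies a flat tax rate."""
        if amount < 0:
            raise ValueError("amount must be non-negative")
        return round(amount * (1 + rate), 2)

    class AddTaxUnitTest(unittest.TestCase):
        # Fictitious data is fine at this level; it only needs to
        # exercise the unit and its interface.
        def test_applies_default_rate(self):
            self.assertEqual(add_tax(100.00), 105.00)

        def test_rejects_negative_amount(self):
            with self.assertRaises(ValueError):
                add_tax(-1)

    if __name__ == "__main__":
        unittest.main()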
Function Test "The objective of function test is to measure the quality of the functional (business) components of the system." Tests verify that the system behaves correctly from the user / business perspective and functions according to the requirements, models, storyboards, or any other design paradigm used to specify the application. The function test must determine if each component or business event: performs in accordance to the specifications, responds correctly to all conditions that may be presented by incoming events / data, moves data correctly from one business event to the next (including data stores), and that business events are initiated in the order required to meet the business objectives of the system.
Function test should be conducted by an independent testing organization to ensure the various components are stable and meet minimum quality criteria before proceeding to System test.
System Test "The objective of system test is to measure the effectiveness and efficiency of the system in the "real-world" environment." System tests are based on business processes (workflows) and performance criteria rather than processing conditions. The system test must determine if the deployed system: satisfies the operational and technical performance criteria, satisfies the business requirements of the System Owner / Users / Business Analyst, integrates properly with operations (business processes, work procedures, user guides), and that the business objectives for building the system were attained.
There are many aspects to System testing; the most common are:
  • Security Testing: The tester designs test case scenarios that attempt to subvert or bypass security.
  • Stress Testing: The tester attempts to stress or load an aspect of the system to the point of failure; the goal being to determine weak points in the system architecture.
  • Performance Testing: The tester designs test case scenarios to determine if the system meets the stated performance criteria (e.g., a login request shall be responded to in 1 second or less under a typical daily load of 1,000 requests per minute); a measurement sketch follows this list.
  • Install (Roll-out) Testing: The tester designs test case scenarios to determine if the installation procedures lead to an invalid or incorrect installation.
  • Recovery Testing: The tester designs test case scenarios to determine if the system meets the stated fail-over and recovery requirements.
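As a rough illustration of the performance-testing idea above, the Python sketch below times a burst of requests against the 1-second criterion. The login function is a hypothetical stand-in; a real performance test would drive the deployed system with a dedicated load tool:

    import statistics
    import time

    def login(user, password):
        """Stand-in for the real login request; replace with a client call."""
        time.sleep(0.01)  # simulated server work
        return True

    # Time each request in a burst and compare against the stated
    # criterion: every login answered in 1 second or less.
    samples = []
    for i in range(100):  # scale toward 1,000 requests/minute in a real run
        start = time.perf_counter()
        login("user%d" % i, "secret")
        samples.append(time.perf_counter() - start)

    print("max: %.3fs  mean: %.3fs" % (max(samples), statistics.mean(samples)))
    assert max(samples) <= 1.0, "performance criterion violated"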
System test should be conducted by an independent testing organization to ensure the system is stable and meets minimum quality criteria before proceeding to User Acceptance test.
User Acceptance Test "The objective of User Acceptance test is for the user community to measure the effectiveness and efficiency of the system in the 'real-world' environment." User Acceptance test is based on User Acceptance criteria, which can include aspects of Function and System test. The User Acceptance test must determine if the deployed system: meets the end users' expectations, supports all operational requirements (both recorded and unrecorded), and fulfills the business objectives (both recorded and unrecorded) for the system.
User Acceptance test should be conducted by the end users of the system and monitored by an independent testing organization. The Users must ensure the system is stable and meets the minimum quality criteria before proceeding to system deployment (roll-out).

Testing Roles

As in any organized endeavor, there are Roles that must be fulfilled within a testing organization. The need for any given role depends on the size, complexity, goals, and maturity of the testing organization. These are roles, so it is quite possible for one person to fulfill many of them.
Test Lead or Test Manager The Role of Test Lead / Manager is to effectively lead the testing team. To fulfill this role the Lead must understand the discipline of testing and how to effectively implement a testing process while fulfilling the traditional leadership roles of a manager. What does this mean? The manager must manage and implement or maintain an effective testing process.
Test Architect The Role of the Test Architect is to formulate an integrated test architecture that supports the testing process and leverages the available testing infrastructure. To fulfill this role the Test Architect must have a clear understanding of the short-term and long-term goals of the organization, the resources (both hard and soft) available to the organization, and a clear vision on how to most effectively deploy these assets to form an integrated test architecture.
Test Designer or Tester The Role of the Test Designer / Tester is to: design and document test cases, execute tests, record test results, document defects, and perform test coverage analysis. To fulfill this role the designer must be able to apply the most appropriate testing techniques to test the application as efficiently as possible while meeting the test organization's testing mandate.
Test Automation Engineer The Role of the Test Automation Engineer is to create automated test case scripts that perform the tests as designed by the Test Designer. To fulfill this role the Test Automation Engineer must develop and maintain an effective test automation infrastructure using the tools and techniques available to the testing organization. The Test Automation Engineer must work in concert with the Test Designer to ensure the appropriate automation solution is being deployed.
Test Methodologist or Methodology Specialist The Role of the Test Methodologist is to provide the test organization with resources on testing methodologies. To fulfill this role the Methodologist works with Quality Assurance to facilitate continuous quality improvement within the testing methodology and the testing organization as a whole. To this end the methodologist: evaluates the test strategy, provides testing frameworks and templates, and ensures effective implementation of the appropriate testing techniques.

Testing Techniques

Over time, the IT industry and the testing discipline have developed several techniques for analyzing and testing applications.
Black-box Tests Black-box tests are derived from an understanding of the purpose of the code; knowledge of the actual internal program structure is not required with this approach. The risk of this approach is that "hidden" functions (functions unknown to the tester) will not be tested and may never even be exercised.
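For example, a black-box test might exercise a date routine purely from its documented rule; the is_leap_year function here is hypothetical, and the tester never looks at its body:

    def is_leap_year(year):
        """Hypothetical unit; the tester sees only its specification."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Every case below comes from the documented leap-year rule,
    # not from reading the implementation.
    assert is_leap_year(2000)      # divisible by 400
    assert not is_leap_year(1900)  # divisible by 100 but not 400
    assert is_leap_year(2004)      # divisible by 4
    assert not is_leap_year(2010)  # not divisible by 4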
White-box Tests or Glass-box tests White-box tests are derived from an intimate understanding of the purpose of the code and the code itself; this allows the tester to test "hidden" (undocumented) functionality within the body of the code. The challenge with any white-box testing is finding testers who are comfortable reading and understanding code.
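By contrast, a white-box test can target a branch discovered only by reading the code; the shipping_cost function and its undocumented surcharge are assumptions for illustration:

    def shipping_cost(weight_kg):
        """Hypothetical unit; reading the body reveals a hidden branch."""
        if weight_kg > 50:  # undocumented oversize surcharge
            return weight_kg * 2.0 + 25.0
        return weight_kg * 2.0

    # This case exists only because the tester read the code and
    # spotted the oversize branch; no specification mentions it.
    assert shipping_cost(51) == 51 * 2.0 + 25.0
    assert shipping_cost(50) == 50 * 2.0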
Regression tests Regression testing is not a testing technique or test phase; it is the reuse of existing tests to retest previously implemented functionality. It is included here only for clarification.
Equivalence Partitioning Equivalence testing leverages the concept of "classes" of input conditions. A "class" of input could be "City Name", where testing one or several city names could be deemed equivalent to testing all city names. In other words, each instance of a class used in a test covers a large set of other possible tests.
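A minimal sketch of the idea, using a hypothetical city-name validator; the input classes chosen are assumptions for illustration:

    def is_valid_city_name(name):
        """Hypothetical validator under test."""
        return bool(name) and all(ch.isalpha() or ch in " -'" for ch in name)

    # One representative value stands in for each whole class of input.
    assert is_valid_city_name("Toronto")         # class: well-formed names
    assert is_valid_city_name("Trois-Rivieres")  # class: hyphenated names
    assert not is_valid_city_name("")            # class: empty input
    assert not is_valid_city_name("T0r0nt0")     # class: names with digits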
Boundary-value Analysis Boundary-value analysis is really a variant of Equivalence Partitioning, but in this case the upper and lower ends of the class, and often values just outside the valid range of the class, are used as input to the test cases. For example, if the class is "Numeric Month of the Year", then the boundary values could be 0, 1, 12, and 13.
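Carrying on with the month example, a short sketch (the validator itself is an assumption):

    def is_valid_month(month):
        """Hypothetical validator under test."""
        return 1 <= month <= 12

    assert is_valid_month(1)       # lower boundary, valid
    assert is_valid_month(12)      # upper boundary, valid
    assert not is_valid_month(0)   # just below the valid range
    assert not is_valid_month(13)  # just above the valid range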
Error Guessing Error Guessing involves making an itemized list of the errors expected to occur in a particular area of the system and then designing a set of test cases to check for these expected errors. Error Guessing is more testing art than testing science but can be very effective given a tester familiar with the history of the system.
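A sketch of the approach, where both the parser and the itemized list of guessed errors are illustrative assumptions:

    def parse_quantity(text):
        """Hypothetical input parser under test."""
        value = int(text.strip())
        if value <= 0:
            raise ValueError("quantity must be positive")
        return value

    # The itemized list of guessed errors drives the test cases.
    guessed_error_inputs = [
        "",       # blank input is often mishandled
        "  7  ",  # stray whitespace
        "-1",     # negative quantity
        "1e3",    # scientific notation slips past naive parsing
    ]

    for text in guessed_error_inputs:
        try:
            print(repr(text), "->", parse_quantity(text))
        except ValueError as exc:
            print(repr(text), "-> rejected:", exc)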
Output Forcing Output Forcing involves designing a set of test cases to produce a particular output from the system. The focus here is on creating the desired output, not on the input that initiates the system response.
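A brief sketch, assuming a hypothetical order-classification function; the inputs are chosen by working backwards from the desired "priority" output:

    def classify_order(total):
        """Hypothetical unit: tiers an order by its total."""
        if total >= 1000:
            return "priority"
        if total >= 100:
            return "standard"
        return "economy"

    # Start from the output we want to force, then pick inputs for it.
    assert classify_order(1000) == "priority"
    assert classify_order(2500) == "priority"
    # Confirm a neighbouring input does not force the same output.
    assert classify_order(999.99) == "standard"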
