Responsibilities and Deliverables
Test Case Design
A test case design is not the same thing as a test case; the design captures what the Test Designer / Tester is attempting to accomplish with one or more test cases. This can be as informal as a set of notes or as formal as a deliverable that describes the content of the test cases before the actual tests are implemented.
Test Cases
A test case is a sequence of steps designed to test one or more aspects of the application. At a minimum, each test case step should include: a description of the action, supporting data, and expected results. The test case deliverable can be captured using a "test case template" or by using one of the several commercial / freeware / shareware tools available.
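As an illustration, the sketch below shows the sort of structure a simple test case template implies, using the login lockout scenario described later in this article. The class and field names are illustrative only, not part of any particular tool or standard.

```python
# Minimal sketch of a test case captured as structured data.
# The TestStep / TestCase classes and field names are illustrative.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TestStep:
    action: str           # description of the action the tester performs
    data: str             # supporting data used by the step
    expected_result: str  # what the application should do in response


@dataclass
class TestCase:
    case_id: str
    title: str
    steps: List[TestStep] = field(default_factory=list)


login_lockout = TestCase(
    case_id="TC-017",
    title="Login Screen -- lockout after three failed login attempts",
    steps=[
        TestStep("Enter an invalid User Id and password, then submit",
                 "user=bad_user, pwd=xxxx",
                 'Error "Invalid User Id or Password -- Try Again"'),
        TestStep("Repeat the invalid login a second time",
                 "user=bad_user, pwd=xxxx",
                 'Error "Invalid User Id or Password -- Try Again"'),
        TestStep("Repeat the invalid login a third time",
                 "user=bad_user, pwd=xxxx",
                 'Error "Invalid User Id or Password -- Contact your Administrator"; '
                 "account locked"),
    ],
)
```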
Test Case Execution
Test case execution is the actual running or execution of a test case. This can be done manually or by automated scripts that perform the actions of the test case.
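As a minimal sketch of what an automated version of the lockout test case might look like, consider the pytest-style test below. The myapp.auth module and its login() / is_locked_out() functions are hypothetical stand-ins for the real application under test; the expected error messages come from the example requirements quoted later in this article.

```python
# Hypothetical automated execution of the lockout test case (pytest style).
# myapp.auth, login(), and is_locked_out() do not exist; they stand in for
# whatever interface the application under test actually exposes.
from myapp.auth import login, is_locked_out  # hypothetical module


def test_lockout_after_three_failed_attempts():
    # Steps 1-2: two failed attempts should produce the "Try Again" error.
    # Assumption: login() returns an object with an .error attribute.
    for _ in range(2):
        result = login("bad_user", "xxxx")
        assert result.error == "Invalid User Id or Password -- Try Again"

    # Step 3: the third failed attempt should lock the account.
    result = login("bad_user", "xxxx")
    assert result.error == "Invalid User Id or Password -- Contact your Administrator"
    assert is_locked_out("bad_user")
```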
Capturing Test Results
Capturing test results is a simple itemization of the success or failure of any given step in a test case. Failure of a test case step does not necessarily mean that a defect has been found -- it simply means the application did not behave as expected within the context of the test case. There are several common reasons for a test case step to fail: invalid test design / expectations, invalid test data, or invalid application state. The tester should ensure that the failure was caused by the application not performing to specification and that the failure can be replicated before raising a defect.
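The sketch below shows one way a per-step result could be recorded so that the common failure causes above are distinguished from a genuine application defect. The names and the replication check are illustrative only.

```python
# Illustrative per-step result record; not tied to any particular tool.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class FailureReason(Enum):
    NONE = "passed"
    INVALID_TEST_DESIGN = "invalid test design / expectations"
    INVALID_TEST_DATA = "invalid test data"
    INVALID_APP_STATE = "invalid application state"
    APPLICATION_DEFECT = "application did not perform to specification"


@dataclass
class StepResult:
    case_id: str
    step_number: int
    passed: bool
    reason: FailureReason = FailureReason.NONE
    notes: Optional[str] = None


def should_raise_defect(result: StepResult, replicated: bool) -> bool:
    """Raise a defect only when the application is at fault and the
    failure can be replicated."""
    return (not result.passed
            and result.reason is FailureReason.APPLICATION_DEFECT
            and replicated)
```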
Document Defects
The tester documents any defects found during the execution of the test case. The tester captures: tester name, defect name, defect description, severity, impacted functional area, and any other information that would help in the remediation of the defect. A defect is the primary deliverable of any tester; it is what the tester uses to communicate findings to the project team.
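A minimal sketch of such a defect record is shown below; the field names and sample values are illustrative, and most teams would capture this in a defect-tracking tool rather than in code.

```python
# Illustrative defect record carrying the minimum fields listed above.
from dataclasses import dataclass


@dataclass
class Defect:
    author: str          # tester name
    name: str            # defect name / title
    description: str     # sequence of events leading to the defect
    severity: str        # e.g. "Low", "Medium", "High"
    impacted_area: str   # functional component or area of the system
    status: str = "New"


defect = Defect(
    author="J. Tester",  # illustrative name
    name="Login Screen -- User not locked out after three failed login attempts",
    description="Third invalid login attempt displayed the 'Try Again' error "
                "instead of locking the account as required.",
    severity="Medium",
    impacted_area="Security / Login Screen",
)
```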
Test Coverage Analysis
The tester must determine if the testing mandate and defined testing scope have been satisfied -- then document the current state of the application. How coverage analysis is performed depends on the sources available to the tester. If the tester was able to map test cases to well-formulated requirements, then coverage analysis is a straightforward exercise. If this is not the case, the tester must map test cases to functional areas of the application and determine if the coverage is "sufficient" -- this is obviously more of a "gut-check" than a true analysis.
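The sketch below illustrates the straightforward case: test cases mapped to requirement identifiers, with unmapped requirements reported as gaps. The requirement IDs and the mapping structure are invented for the example.

```python
# Illustrative requirements-based coverage check.
from typing import Dict, List, Set

# Each test case is mapped to the requirement(s) it exercises.
coverage_map: Dict[str, List[str]] = {
    "TC-017": ["REQ-SEC-003"],                  # login lockout
    "TC-018": ["REQ-SEC-001", "REQ-SEC-002"],   # password rules
}

all_requirements: Set[str] = {
    "REQ-SEC-001", "REQ-SEC-002", "REQ-SEC-003", "REQ-SEC-004",
}

covered = {req for reqs in coverage_map.values() for req in reqs}
uncovered = all_requirements - covered

print(f"Coverage: {len(covered)} of {len(all_requirements)} requirements")
print("No test cases mapped to:", sorted(uncovered))
```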
Testing Mandate and Scope
The Test Designer / Tester must have a clear understanding of the Testing Mandate and Testing Scope before proceeding with their task -- for more on Testing Mandates and Testing Scope see the associated article "Testing and The Role of a Test Lead / Test Manager". The temptation of any tester is to test "everything"; the problem is that this cannot be done for any application within a reasonable timeframe. The tester must ensure any test cases to be designed and executed fit into the scope of the current testing effort -- if not, then either the scope needs to be redefined or the test cases need to be dropped.
Test Phases and Test Case Design
The testing phase impacts the style, content, and purpose of any given test case. If the designer thinks of the test phases in terms of "levels of abstraction" or "range of view", then the types of tests that need to be implemented for any given phase of testing become apparent.
Unit Test
The test designer, in this case the developer, creates test cases that test at the level of a line of code.
Function Test
The test designer creates test cases that test at the level of distinct business events or functional processes.
System Test
The test designer creates test cases that test at the level of the system (Stress, Performance, Security, Recovery, etc.) or complete end-to-end business threads.
Acceptance Test
The test designer, in this case a subject matter expert or end user, creates test cases that test at the level of business procedures or operational processes.
Any given test case should not replicate the testing accomplished in a previous phase of testing. One of the most common mistakes that testers and testing organizations make is to replicate the coverage already accomplished in Function Test when creating test cases for System Test.
Defect Content
A defect is the most important deliverable a test designer creates. The primary purpose of testing is to detect defects in the application before it is released into production; furthermore, defects are arguably the only product the testing team produces that is seen by the project team. The tester must document defects in a manner that is useful in the defect remediation process; at a bare minimum each defect should contain: Author, Name, Description, Severity, Impacted Area, and Status. For example, if a defect was discovered during functional testing of a typical Login screen, then the information captured by the defect could look like this:
Defect Name / Title
The name or title should contain the essence of the defect including the functional area and nature of the defect. All defects relating to the login screen would begin with "Login Screen" but the remainder of the name would depend on the defect.
Example: "Login Screen -- User not locked out after three failed login attempts"
Defect Description
The description should clearly state the sequence of events that leads to the defect and, where possible, include a screen snapshot or printout of the error.
Example: "Attempted to Login using the Login Screen using an invalid User Id. On the first two attempts the application presented the error message "Invalid User Id or Password -- Try Again" as expected. The third attempt resulted in the same error being displayed (ERROR). According to the requirements the User should have been presented with the message "Invalid User Id or Password -- Contact your Administrator" and been locked out of the system."
How to replicate
The defect description should provide sufficient detail for the triage team and the developer fixing the defect to duplicate the defect.
Defect severity
The severity assigned to a defect depends on: the phase of testing, the impact of the defect on the testing effort, and the risk the defect would present to the business if it were rolled out into production. Using the "Login Screen" example, if the current testing phase were Function Test the defect would be assigned a severity of "Medium", but if the defect were detected or still present during System Test it would be assigned a severity of "High".
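As a rough illustration only, the example rule above could be encoded as a simple lookup; phases other than the two named in the example are left to a triage decision, since the article does not define them.

```python
# Illustrative only: phase-dependent severity for the login-lockout example.
phase_severity = {
    "Function Test": "Medium",
    "System Test": "High",
}


def severity_for(phase: str) -> str:
    # Phases not covered by the example rule default to a triage decision.
    return phase_severity.get(phase, "to be triaged")


print(severity_for("Function Test"))  # Medium
print(severity_for("System Test"))    # High
```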
Impacted area
The Impacted area can be referenced by functional component or by functional area of the system -- often both are used. Using the "Login Screen" example, the functional component would be "Login Screen" and the functional area of the system would be "Security".
Relationships with other Team Roles
Test Lead / Manager
The Test Designer must obviously have a good working relationship with the Test Lead, but more importantly the Test Designer must keep the Test Lead aware of any challenges that could prevent the Test Team from being successful. Often the Test Designer (or Designers) has a much clearer understanding of the current state of the application and of potential challenges, given their close working relationship with the application under test.
Test Automation Engineer
If the test cases are going to be automated then the Test Designer must ensure the Test Automation Engineer understands precisely what the test case is attempting to accomplish and how to respond if a failure occurs during execution of the test case. The Test Designer should be prepared to make the compromises required in order for the Test Automation Engineer to accomplish the task of automation, as long as these compromises do not add any significant risk to the application.