Tuesday, May 25, 2010

Six facts about software application risks

Similar to the SDLC (software development life cycle), a software application has an RLC, or risk life cycle, which involves different stages: risk identification, risk assessment, impact analysis, countermeasure identification, countermeasure assessment, risk planning, and so on. There are certain facts about risk:

1. All risks identified or perceived in a software application do not necessarily happen in real usage: it is a proven fact that not all risks identified or perceived during an application's pre-launch stage occur during post-launch, real-life usage. Some perceived risks may never happen, and some unidentified risks may appear later. Either way, it is always good to identify the risks that may occur during usage, and the more realistic the better. What matters is not whether they happen in a real scenario, but planning how to cope if they do.
2. All risks have an impact: every risk has an impact, whether large, medium or small. It is the impact that makes its severity high, medium or low, and accordingly a plan is prepared to handle the risk when it happens.
3. The same risk in different circumstances will have a different impact: the same risk will vary in severity under different circumstances of usage, user base, geographic location, type of application, etc.
4. No application is 100% risk free, whatever countermeasures are taken; all countermeasures do is lower the risk: a risk plan never fool-proofs against a risk's impact, it only helps lower the impact to a certain level.
5. Risk impact cost vs. countermeasure cost: it is very important to analyse both before deciding on the plan. Some risks may be very severe, but their countermeasure cost could be unaffordable.
6. The biggest risk in any application is identifying the wrong risks, impact, or plan: identifying the wrong risk with the right estimation of impact and countermeasure is useless. Equally useless is identifying the right risk with the wrong impact analysis (thereby underestimating or overestimating the impact) and arriving at the wrong countermeasure. The right risk identification with the right impact analysis but the wrong countermeasure is also a waste of effort.

Eight Checkpoints for Testers during Software Testing

Testing is a process of helping the development/project team improve the quality of software before it is released to the customer for use. Certain essential steps should always be followed by testers during software testing to streamline the process. The most important checkpoints for testers during software testing, in my opinion, are:
(1)- A complete document of customer and business requirements, specifying all the requirements for the development of the product.
(2)- The latest completed build and the URL of the application to hit for testing.
(3)- Software requirements for installing the application on PCs for testing or for database connectivity.
(4)- A training/demo of the project by the development team to the testing team, so as to understand the flow/functionality of the software.
(5)- The scope of testing should be made clear by the development team/head.
(6)- If the build comes for retesting, it should be accompanied by a revised document that includes the changes incorporated in the software.
(7)- Clarity regarding which member of the development team should be contacted for any clarification required during the testing phase about a module's functionality, or if testers encounter a showstopper in the software.
(8)- After release of the bug list to the development team, how much time they will require to fix the bugs.

    Twenty ways to ensure complete coverage of software testing


To ensure complete coverage of software testing, the testing team must be careful about certain activities that are part of the process. If the software testing is not complete as per the business and customer requirements, it could have a severe adverse effect during or after implementation of the software at the customer site. The greater the coverage, the lower the chances of any bugs passing through to the implementation phase. So, to ensure complete coverage of the software product and to find the maximum possible bugs or defects, testers have to ensure the following steps:
    1. Ensure that the documents defining the business and customer requirements are complete and correct. How to do that can be taken up in a separate topic.
    2. Ensure that the development team has clearly understood the documents.
    3. Ensure that testers themselves have thoroughly read and understood the documents.
4. Prepare a clear-cut scope of testing based on the product documents.
5. Ensure that the test strategy and test planning are as per system requirements.
    6. Decide test methodology and test tools (if any), and test schedule.
    7. Prepare Test Cases based on business rules and customer requirements.
8. Ensure that the test cases are extensive and sensible enough to cover the complete requirements.
9. Ensure that during testing no changes to the test environment (code etc.) are made by the development team.
    10. Ensure that development team representatives (1 or all) are present during the complete testing.
    11. Create Test Scenarios based on test cases.
    12. Observe the result of each test case and record it accordingly.
13. Prepare a comprehensive and detailed test report explaining each test case, scenario and result.
14. Ensure that all reported bugs make sense (no duplication or overlap of scenarios and no repetition of reported bugs).
    15. Ensure that the complete testing finishes as per schedule.
16. The final report submitted should clearly state the areas not covered under testing, the reasons for this, and the impact on the product.
17. Simulate any bugs that are not clear to the development team.
18. Ensure that you have a tentative plan from the development team for when they will fix all bugs and submit the build back to the testing team.
19. Verify all fixed bugs and ensure that the development team sits with the testers during verification.
20. Prepare the final report and submit it back to the development team, giving the status of each bug as verified fixed or not fixed. Report any new bugs that appeared in the software while these bugs were being fixed.

    Types of Test Automation Frameworks

    In a previous posting, we examined the evolution of automation frameworks.
    How are frameworks being implemented today by various QA organizations? Here’s a basic summary of the types of test automation currently in use:
    Ad-Hoc
    • Scripting developed in reactionary mode to test a single issue or fix
    • Test case steps are part of each Action script: high maintenance, low reusability
    • Contains some data inputs stored in test script’s datasheet, but not true data-driven
    Data-Driven
    • Scripts are an assembly of function calls
    • Data for test cases read in from external source (e.g., Excel spreadsheet, ODBC database)
    • Results can be captured externally per script execution (i.e., spreadsheet, database)
    Keyword-Driven
    • Test cases are expressed as sequence of keyword-prompted actions
    • A Driver script executes Action scripts which call functions as prompted by keywords
• No scripting knowledge necessary for developing and maintaining test cases (a minimal driver sketch appears at the end of this summary)
    Model-Driven
    • Descriptive programming is used to respond to dynamic applications (e.g., websites)
• Actually, this is a method that can be used within other solution types
    • Objects defined by parameterized code (i.e., regular expressions, descriptive programming)
    • Custom functions used to enhance workflow capabilities
    3rd-Party: HP (Mercury) Quality Center with QuickTest Pro and Business Process Testing
    • Similar to keyword-driven but controlled via HP Quality Center database
    • Can be used for collaborative work efforts: Business Analysts, Testers, Automaters
    • Begins with high-level test requirements:
    • Business Requirements defined
    • Application Areas (shared resources) defined
    • Business Components defined and grouped under Application Areas
    • Test steps defined
    • Tests can be defined as Scripted Components (i.e., QTP scripts with Expert Mode)
    • Business Process Tests and Scripted Components are collected under Test Plan
    • Test Runs are organized from Test Plan components and executed from Test Lab
    • Test Runs can be scheduled and/or executed on remote/virtual test machines (with QTP)
    • Defects can be generated automatically or entered manually per incident
    • Dashboard available for interactive status monitoring
    Intelligent Query-Driven
    • Agile
    • Object Oriented
    • Constructed as a layered framework
    • Test data is compiled at runtime using data-mining techniques
    Of course, a combination of these techniques could also be used based on the scope and depth of test requirements.
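To make the keyword-driven approach above concrete, here is a minimal driver sketch in plain VBScript (run with cscript). The keywords, test data, and Echo statements are invented stand-ins for the Action scripts and function libraries a real framework would call:
'--------------------------------------------------------------
Option Explicit
Dim testSteps, i
' Each row pairs a keyword with an argument (made-up test data)
testSteps = Array(Array("Login", "user1"), Array("Search", "widgets"), Array("Logout", ""))
For i = 0 To UBound(testSteps)
    ExecuteKeyword testSteps(i)(0), testSteps(i)(1)
Next
Sub ExecuteKeyword(keyword, arg)
    ' The driver dispatches each keyword to the function that implements it
    Select Case keyword
        Case "Login":  WScript.Echo "Logging in as " & arg
        Case "Search": WScript.Echo "Searching for " & arg
        Case "Logout": WScript.Echo "Logging out"
        Case Else:     WScript.Echo "Unknown keyword: " & keyword
    End Select
End Sub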

    Functional Testing of Web Services: Part IV


    Two other web service test tools which deserve honorable mention in our comparison tests were: soapUI, available as Open Source or in a pro version distributed by eviware; and HP’s QuickTest Professional Web Services Add-In.
soapUI
    This tool does not require a detailed knowledge of complicated technologies like .NET. Although the user interface is not as intuitive as that of some other tools, a tester who has some experience in programming can learn to create, organize and perform test activities fairly quickly.
    soapUI allows structuring your test project into test suites that contain test cases, which can contain test steps. This structure is well-managed: you can add, modify, delete and change the order of every item in the structure. soapUI provides the tools to manage and run your test cases, and to view the SOAP responses to your test requests. You can even include limited load testing scenarios. For added flexibility, soapUI supports Groovy Script, a scripting language similar to Java.
QuickTest Pro Web Services Add-In
    QuickTest Pro with the Web Services Add-In offers a wizard to create a test object that represents the Web Service and port you want to test, and inserts the relevant steps directly into your test or component. This accelerates the process of designing a basic test that checks the operations that your Web service supports. You then update the wizard-generated steps of your test or component by replacing the generated argument values with known valid values, updating the expected values, and selecting the nodes you want to check in your checkpoints.
    A QTP XML Warehouse setting is used to store request data, and a QTP Object Repository is used to store responses as checkpoints for verification.
    The wizard generates a generic XML structure as a place holder for the expected XML return values. However, before you can actually run your test or component, you must replace the default values with the appropriate values for your test. A valid SOAP request can be imported into the XML Warehouse as a separate step, and a separate XML checkpoint would need to be created for each test case. Defining tests is a little cumbersome and the UI is a little clumsy, but if you’re already using QTP for functional testing, including web service regression tests in your test suites is an easy option.
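For teams without either tool, a bare-bones web service check can even be scripted by hand. This hedged sketch posts a SOAP envelope using the MSXML2.XMLHTTP COM object; the endpoint, SOAPAction header, envelope, and expected element are placeholder values, not a real service:
'--------------------------------------------------------------
Dim http, envelope
envelope = "<soap:Envelope xmlns:soap=""http://schemas.xmlsoap.org/soap/envelope/"">" & _
    "<soap:Body><GetQuote xmlns=""http://example.com/stocks"">" & _
    "<symbol>HPQ</symbol></GetQuote></soap:Body></soap:Envelope>"
Set http = CreateObject("MSXML2.XMLHTTP")
http.open "POST", "http://example.com/StockService.asmx", False   ' synchronous call
http.setRequestHeader "Content-Type", "text/xml; charset=utf-8"
http.setRequestHeader "SOAPAction", "http://example.com/stocks/GetQuote"
http.send envelope
' A poor man's checkpoint: verify the response contains the expected element
If InStr(http.responseText, "<Price>") > 0 Then
    WScript.Echo "PASS: expected element found"
Else
    WScript.Echo "FAIL: HTTP " & http.status & " " & http.statusText
End If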

    Test Automation Metrics: Tracking Progress


Did you ever have one of those days/weeks/months when your boss walks nervously into your office and says, in that "I-just-came-out-of-a-very-long-and-gruesome-meeting" tone of voice, "Well, how's that automation project coming along?" You figure that just answering "Fine" probably is not enough in this situation, since the fate of the company (and certainly your job) hangs in the balance. Oh, if only you had been tracking your progress all along!
    Well, have no fear, the answer is here!
    In a previous blog entry, I stated that “automated testing is the programmatic execution of test cases.” That gives us the starting point for our metrics: manual test case count. Generally, these counts fall into two categories: strategically Planned Test Cases, and actually written Manual Test Cases.
Most automation effort tends to be focused on converting written manual test cases to automated test scripts, so that gives us our third metric, Automated Test Cases.
Progress can be gauged by the ratio of actual to expected. Let's assemble our metrics into a table and see how this plays out:
Tracking Automation Progress (table)
    As shown above, the progress of automation is expressed as the ratio between (Automated Test Cases)/(Manual Test Cases). This implies a typical dependency of automation on the creation of separate, detailed manual tests created by subject matter experts. If your organization follows a different sort of QA process, where the person doing the automation is also writing the manual tests and converting them directly into automated test scripts, the formula could be altered to express the ratio of (Automated Test Cases)/(Planned Test Cases).
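As a worked example of these ratios (all counts invented), the arithmetic is simple:
'--------------------------------------------------------------
Dim plannedTC, manualTC, automatedTC
plannedTC = 200     ' strategically Planned Test Cases
manualTC = 150      ' Manual Test Cases actually written
automatedTC = 90    ' Automated Test Cases completed so far
' Progress against written manual tests, and against the overall plan
WScript.Echo "Automated/Manual:  " & FormatPercent(automatedTC / manualTC, 0)    ' 60%
WScript.Echo "Automated/Planned: " & FormatPercent(automatedTC / plannedTC, 0)   ' 45%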

    Creating a Solid Test Automation Framework with QTP

    HP’s QuickTest Pro can itself be used to build a test automation framework. Listed below are some of the characteristics you should take into consideration during the design phase.
    Test automation scripts and functions should:
    • Be Reusable and Repeatable
    • Require Low Maintenance
    • Allow Unattended Execution
    • Support Platform Independence - IE 6 or IE 7, shouldn’t matter
    • Use Data Abstraction - Keyword-Driven Framework
    HP QuickTest Pro scripts should use:
    • Environment Variables
    • Test execution ‘Driver’ to call ‘Actions’
    • ‘Action’ to call functions - i.e., function call for each action
    • Imported test data (Excel) - common format
    • Results reporting - QTP + Quality Center; Excel “Step Status”
    VBScript functions can be:
    • Simple - Specific purpose; True / False or 0 / 1 return code
    • Complex - Multiple steps (e.g., login); “Fuzzy” return code that maps to business rules
• Resilient - include Error Handling (a short sketch follows below)
    Handling Objects in the Repository is done by:
    • Object changes/additions per app release
    • Replace with descriptive programming whenever possible
      • Functions using regular expressions (e.g., popup windows: IE 6 vs IE 7)
      • Web app objects usually classifiable by type, html tag, or class (QTP test object properties)
      • Reusable methods: look before you write!
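As an illustration of the 'Simple' yet 'Resilient' function style listed above, here is a sketch; the function name and purpose are invented, but the pattern (one specific check, local error handling, 0/1 return code) is the point:
'--------------------------------------------------------------
Function VerifyFileExists(filePath)
    'specific purpose, 0/1 return code, traps its own errors
    Dim fso
    On Error Resume Next
    Set fso = CreateObject("Scripting.FileSystemObject")
    If Err.Number = 0 And fso.FileExists(filePath) Then
        VerifyFileExists = 0   ' pass
    Else
        VerifyFileExists = 1   ' fail
    End If
    On Error GoTo 0
End Function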

    Introduction to HP QuickTest Pro Objects

    HP QuickTest Professional uses an Object Repository to store information about the various fields and controls used to build the user interface of a software application. This repository is essentially a database of the names and properties of all the objects encountered during test script creation.
    “Essentially all configuration and run functionality provided via the QuickTest interface is in some way represented in the QuickTest automation object model via objects, methods, and properties.” – HP (Mercury) QuickTest Professional User’s Guide
    A QuickTest Pro 9.x Object Repository file structure looks something like this:
QTP Object Repository
    Here’s an example of the object definition for an ‘Add’ button contained in the repository above:
Add button definition
    Note that the value of the “name” property defaults to what was discovered during recording, but the value of any property can be changed by a QTP scripter to a more recognizable value - including the logical name - as part of the repository maintenance process.
    By comparison, these properties are very similar to object definitions found using the Microsoft IE Dev Toolbar.
    Application object:
App Add button object
Object properties discovered with the Microsoft IE Developer Toolbar:
IE Dev Toolbar properties
    Note also that whatever works in VBScript for object handling works in QTP - but not necessarily vice-versa!
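As a hedged illustration of the descriptive-programming alternative mentioned earlier, an object like the 'Add' button above can be addressed by its properties (regular expressions allowed) instead of through the repository. The property values here are invented, and the statement only runs inside QTP:
'--------------------------------------------------------------
' Address the object inline by its properties, bypassing the Object Repository
Browser("title:=.*My App.*").Page("title:=.*My App.*"). _
    WebButton("name:=Add", "html tag:=INPUT").Click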

function junction: File System Object (Functional Testing, HP QuickTest Professional)


    Writing functions that make use of the methods exposed by the Windows file system object might seem trivial at first, but take my word for it, it is a worthwhile activity. If you use HP QuickTest Pro for test automation, or even need to do some system maintenance with VBScript, these simple scripts will become an invaluable part of your toolkit. You will come to depend on their simplicity and robustness when they are used for mundane tasks within your test automation scripts.
    This first function returns the name of a file contained in a long path statement.
    For example, if FullSpec = “C:\Folder1\Folder2\Folder3\MyFile.xls”, the function will return the value “MyFile.xls”:
'--------------------------------------------------------------
Function GetLongFileName(FullSpec)
    'returns the file name with its extension from a full path
    'assuming the last element is the file name
    Dim fso
    Set fso = CreateObject("Scripting.FileSystemObject")
    GetLongFileName = fso.GetFileName(FullSpec)
End Function
    Conversely, this next function returns the full path to a file from the fully-pathed file name.
For example, if FullSpec = "C:\Folder1\Folder2\Folder3\MyFile.xls", the function will return the value "C:\Folder1\Folder2\Folder3":
'--------------------------------------------------------------
Function GetParentPath(FullSpec)
    'returns just the path portion of the string
    'assuming the last element is the file name
    'Note: function parses out the trailing backslash
    Dim fso
    Set fso = CreateObject("Scripting.FileSystemObject")
    GetParentPath = fso.GetParentFolderName(FullSpec)
End Function
    And as another twist, the function below returns the name of the file without the extension.
    For example, if FullSpec = “C:\Folder1\Folder2\Folder3\MyFile.xls”, the function will return the value “MyFile”. This is useful as part of a routine for renaming converted files:
'--------------------------------------------------------------
Function GetFileBase(filespec)
    'returns the file name without its extension from a full path
    'assuming the last element is the file name
    Dim fso
    Set fso = CreateObject("Scripting.FileSystemObject")
    GetFileBase = fso.GetBaseName(filespec)
End Function
    Of course, we can do much more than parse file names. Here’s a function that will add a new folder to a specified path, after verifying that the path is found and that the folder doesn’t already exist:
'--------------------------------------------------------------
Function AddNewFolder(FullPath, FolderName)
    'creates a subfolder under the specified path
    Dim fso, f, fc, nf
    If FolderName = "" Then
        FolderName = "New Folder"
    End If
    Set fso = CreateObject("Scripting.FileSystemObject")
    If (fso.FolderExists(FullPath)) Then
        Set f = fso.GetFolder(FullPath)
        Set fc = f.SubFolders
    Else
        AddNewFolder = "Path " & FullPath & " not found!"
        Exit Function
    End If
    If (fso.FolderExists(FullPath & "\" & FolderName)) Then
        AddNewFolder = "Folder " & FolderName & " exists"
        Exit Function
    Else
        Set nf = fc.Add(FolderName)
        AddNewFolder = 0 ' "Folder " & FolderName & " added"
    End If
End Function
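For a quick usage sketch of the functions above (paths invented; run under cscript):
'--------------------------------------------------------------
WScript.Echo GetLongFileName("C:\Folder1\Folder2\Folder3\MyFile.xls")  ' MyFile.xls
WScript.Echo GetParentPath("C:\Folder1\Folder2\Folder3\MyFile.xls")    ' C:\Folder1\Folder2\Folder3
WScript.Echo GetFileBase("C:\Folder1\Folder2\Folder3\MyFile.xls")      ' MyFile
Dim result
result = AddNewFolder("C:\Folder1", "TestOutput")
If result <> 0 Then WScript.Echo result   ' prints a message only on failure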
    Don’t worry if your functions seem too small and simple: in the end, they really work better that way. If they do one thing, and do it well, they become a sturdy link in the chain of actions that can make up a test case!

    How to Implement a Functional Test Automation Methodology


    A lot of people talk about using a “Methodology”, but what does that really mean? There are many complicated meanings for the word “methodology” itself, in Wikipedia and elsewhere. Personally, I have always taken it to mean “writing down (the -ology part) the way things are done (the method part) for a given process.”
    OK, so I sit down, write up how everything is or should be done for functional test automation, and I have my methodology. What do I do with this scholarly work? Why, implement it, of course! Here are the steps I have used successfully in many automation engagements.
    Discover
    * Conduct Discovery Session(s) with subject matter experts, QA analysts, testers
    * Establish Functional Test Goals
    * Define Application(s) Under Test
    * Review Requirements, Design Specifications, and Manual Functional Tests
    * Identify Business Processes to Automate
    * Identify Test Resources (Tools, Staff, Skills, Environments)
    * Create Test Plan
    * Develop Detailed Project Plan
    Develop
    * Exercise AUT (that’s Application Under Test, for you newbies)
    * Build Test Data
    * Create Business Component Tests
    * Define Test Plan Components
    * Customize Test Scripts and Function Libraries
    * Create Test Sets and Parameters
    * Dry-Run Test Sets (Test the Tests)
    Execute
    * Verify Test Readiness (App Build Complete?)
    * Validate Test Data
    * Execute Test Cycles (Iterations)
    * Review Functional Test Results
    * Identify Defects
    Analyze
    * Analyze defects discovered
    * Submit Defects to Development for Resolution
    * Retest Cycle for Closure
    * Validate results with stakeholders
    Report
    * Perform Test Coverage Analysis
    * Test Execution Metrics
    * Present Report(s) to stakeholders
    Transition
    * Knowledge Transfer
    * Framework Support and Maintenance (Ongoing)
    - each section includes Key Terms, Roles & Responsibilities, and Deliverables
    - effort (number of tasks) diminishes as method progresses
    If you are not a newbie (an oldbie?), then you might recognize that this is a kind of mashup between standard QA processes and test automation. Since test automation is really just another method for executing tests, this is a natural fit. As with all good things, a solid QA process is reusable in different situations, including automation.

    Automation Test Data Management (TDM)


    Test Data Management (TDM) is fundamental to the success of automated testing. For example, consider that one of the most beneficial forms of test automation is data-driven testing, which gives testers the ability to input and manipulate massive amounts of data in a relatively short period of time. If the data is bad, then running the tests could produce a mountain of unreliable results, and a whole lot of wasted time, money and effort. It pays to get data management right, especially with test automation.
    Associated with test data creation are issues of capacity (i.e., disk space), data verification, data confidentiality, and prolonged test durations. If other forms of testing, such as manual or performance, are taking place in the same environment, there can be issues of concurrency, with tests failing simply because key data has changed “behind the scenes” during test execution, with records locked or altered unexpectedly.
    Checking both visible test results and the effects of test execution on the database are essential to successful automation. Every automated test must start with a known data state, and end with the data in a predictable state — or even in its pre-test state.
    Some questions that will help in planning a test data strategy include:
    • How will data be created and entered into the system?
    • Will production data be copied? If so, how will privacy of information be ensured?
    • Will data need to be created from scratch?
    • Who will be responsible for test data?
    • What volume of data is needed to test the application or system, and how frequently will it need to be refreshed?
• Should data be refreshed completely, or incrementally?
    In some cases, the test automation tools can be leveraged to load all the pre-test data and thus create the initial state of the database prior to executing functional tests.
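As a hedged sketch of that last point, a known pre-test data state could be re-established with a few ADO calls before the functional tests run; the connection string, table, and SQL below are placeholders, not a real schema:
'--------------------------------------------------------------
Dim conn
Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=TESTDB;Initial Catalog=AppTest;Integrated Security=SSPI;"
conn.Execute "DELETE FROM Orders WHERE CreatedBy = 'autotest'"   ' clear the previous run's data
conn.Execute "INSERT INTO Orders (OrderNo, CreatedBy) VALUES (10001, 'autotest')"   ' seed the known state
conn.Close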

    Developing Effective Test Plans for Automation

    An application test plan should contain a minimum set of optimized test cases with maximum test coverage of all critical application functions. It should be executed using a tool that easily adapts to changing data and requirements.
    In order to create an effective automation test, it is first necessary to review the application test plan provided by the application owner to evaluate its suitability for automation. You don’t want to automate a “BAD” test case. Consider the exact intent of the test plan and determine if you can create an effective test case (same coverage) more simply with more reliable automation. It is not acceptable to simply write all the automation scripts directly from the manual test plans. This has the same inherent limitation as doing record/replay for every script: the test is unreliable.
    Design test cases and test scripts to be modular. Instead of using one script to perform multiple functions, break the script into separate functions.
    Design test cases and test scripts to be generic in terms of process and repeatable in terms of data. Read test data from a separate source: keep the scripts free of test data so that when you do have to change the process or the data, you only have to maintain one item in one place.
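As a minimal sketch of that data separation, assuming a hypothetical workbook C:\TestData\LoginData.xls whose first column lists user names (row 1 being headers):
'--------------------------------------------------------------
Dim excel, sheet, row
Set excel = CreateObject("Excel.Application")
excel.Visible = False
excel.Workbooks.Open "C:\TestData\LoginData.xls"
Set sheet = excel.ActiveWorkbook.Worksheets(1)
row = 2   ' skip the header row
Do While sheet.Cells(row, 1).Value <> ""
    ' Drive the same generic test once per data row
    WScript.Echo "Running login test for " & sheet.Cells(row, 1).Value
    row = row + 1
Loop
excel.ActiveWorkbook.Close False
excel.Quit
Set excel = Nothing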
    Consider the goal of the test
    Don’t just blindly follow a manual test plan. See if there is a simple way to accomplish the objective stated in the test plan.
    Focus on modularity and reusability. Create a set of evaluation criteria for functions to be considered when using the automated test tool. These criteria may include:
    • Repeatability of tests
    • Criticality/Risk of applications
    • Simplicity of operation
    • Ease of automation
    • Level of documentation of the function (requirements, specifications, etc.)
    Targeting Test Plans for Automation
    A test plan representing a good candidate for automation would have the following characteristics:
    • Contains a repeatable sequence of actions
    • The sequence of actions is repeated many times
    • It is technically feasible to automate the sequence of actions (tool is capable, no external hardware actions)
    • The behavior of the application under test is the same with automation as without
    • Testing involves non-UI aspects of the application (almost all non-UI functions can and should be executed using automated tests)
    • The same tests must be run on multiple hardware configurations
    • The same tests must be run with varied combinations of other applications to verify compatibility (i.e., Interoperability Testing)

    Automated Software Testing: In The Beginning

    Like other living things, software QA processes evolve over time. From its humble beginnings, test automation has undergone a similar transformation.
    The first stage of test automation was the ‘Record and Playback’ age. Tools were marketed for their ability to record a typical user session and then faithfully play it back using the same objects and inputs. Good marketing, bad practice, because as soon as the application changed the recording stopped working. But it did get people interested in automation.
    The next stage could be called the ‘Script Modularity’ age. The recording concept was retained, but now it was linked to a scripting language that allowed a tool expert to create modular, reusable scripts to perform the actions required in a test case. These scripts could be maintained as separate modules that corresponded roughly to the modules of an application, making it easier to change the test code when the application code changed. Easier, but still not efficient. Complex applications would require complex scripts, which usually require more expertise to maintain.
    And what about handling all that test data hardcoded into each script? The mind boggled.
    Luckily it didn’t boggle too long, which led to the next stage, the ‘Data-Driven’ age. Tools were constructed that allowed access to large pools of external test data, so that these modular scripts could process iteration after iteration of data input. They could churn through mountains of data, as often as desired. What could be better? Well, there were still maintenance issues, as the number of test scripts still grew in direct, or sometimes geometric, proportion to the growth of applications. Additional tools were created just to manage the test execution tools as the asset inventory climbed ever higher. And all those tool experts, they were getting expensive. But that’s the price of progress, right?
Wrong! Evolution usually favors simplicity, since knobby bits tend to break off and fail, sometimes endangering the entire species. Another level of abstraction was necessary, and this ushered in the 'Keyword-Driven' age. The test actions were generalized and stored in function libraries, objects were either inventoried in repositories or identified descriptively by type, and testers who were experts in application testing no longer needed to be test tool experts to execute their automated tests. By choosing from a list of keywords linked to functions, they could now describe their tests in their own terms. Test tool script maintenance was simplified down to occasionally updating the few assets required to process the keywords, which meant fewer tool experts (a.k.a. 'knobby bits', lol). Truly, a Golden Age.
    Of course, evolution doesn’t stop there. There are many experiments in progress today: business process testing, model-driven testing, intelligent query-driven testing, to name a few. The goal seems to be to find the toolset that provides the greatest test coverage with the least amount of maintenance.
Test Coverage and Maintenance Level - 1 (chart)
Test Coverage and Maintenance Level - 2 (chart)
And certainly the field of artificial intelligence will have a major impact on software testing in the future.

    Software Quality Assurance at Design Time

Software engineering practice is called service engineering when it focuses on building telecommunications services as software components. Examples of popular services are Call Forward, Call Hold, Call Waiting, Voicemail, etc. Interestingly, service engineering is highly software quality oriented. In this post, I conjecture that the "general" SDLC (beyond building telecommunications services) has a lot to learn from service engineering. The focus will be on software quality assurance at design time.
One of the differences between service engineering and "general" software engineering is the telecommunications Feature Interaction (FI) problem. The problem attracted the interest of a small international academic community mainly between 1985 and 2006. In this period the problem was thoroughly studied, and many effective applications were deployed by the big players of the telecommunications industry. Small companies, and many big but emerging telecommunications companies, often build development teams that have no previous telecommunications background and have never heard of the FI problem. Under such conditions, an important software quality assurance process is missing.
A feature is a small service. A service is built by assembling several features together. For example, Call Forward is developed using the feature that receives a call request, plus the one that routes a call to a given destination, plus a timer, plus an announcer that plays "your call is being transferred", and so on. The feature interaction (FI) problem is the undesirable situation that arises when two or more features, running together, interact so that at least one of them displays unexpected behavior. FIs are considered software integration defects. For example, you have programmed your home phone to block calls to a given number because you don't want your kids to dial it. This is called Call Screening or Call Blocking. Your smart little monsters, however, discovered the benefits of FI long before, and now they are ready to make a suitable workaround: a friend programs a forward to the forbidden number on his cell, and they simply dial the friend's number in order to be forwarded to the forbidden one. This is an FI between the Call Forward service and the Call Screening service.
Any software development or quality assurance manager wants to see the maximum number of software defects avoided at the earliest stages of the SDLC. I guess service engineering managers are among the happiest software managers in the world, because detecting FI situations is done at feature design time, so the process is part of the SDLC. When a new service has been designed, a model of that design is compared with the models of all the services already deployed. If there is any FI, the service design is modified until there is no interaction. To model a service, languages like SDL or LOTOS are often used, and service model comparison is performed by automatic formal verification. That is not our purpose here.
Let's get to the most important part of the story: FI causes, the runtime execution conditions that produce FI. Being aware of those causes can be useful at design time and when elaborating integration test cases.
The following is a list of FI causes. I will not give examples from the telecommunications world; rather, I will try to show their generality through general but real examples. I'm confident that readers will easily project them onto their own software conceptual sphere. Since we are thinking in terms of "general" software engineering, "feature" will mean any software function.
    1. Assumption violation
1.1 Feature A uses data that is supposed to be static, but feature B can modify it. Example: a billing system has administrator accounts, sales agent accounts, user accounts, etc. Only administrators and agents can change prices. A new feature was added to let sales agents have their own sales agents, and the designers forgot to restrict the second-level agents' rights so that they cannot change prices in the system.
1.2 Feature A is triggered by an event that is supposed to be produced under certain conditions, but feature B can intercept that event, so feature A will not run. Example: a server feature that is supposed to react to a given TCP packet, while a newly developed feature intercepts and modifies all packets received on that socket.
1.3 Feature A gives a meaning to some data that is different from the meaning given by feature B. Example: a SOAP client and server for which a given field has two different meanings.
1.4 Feature A uses data that is supposed to be unique; feature B violates this assumption. Example: an IP address is supposed to be unique, but a Web interface screen allows users to set the same IP for several network appliances.
    2. Contradictory actions
2.1 Feature A has to perform an action that is forbidden by feature B. Example: the system admin can lock database tables that some customers need to access.
    3. Ambiguous event semantics
3.1 Two different situations create the same event. Example: many implementations of SIP (a VoIP protocol) send back the response 500 Server Internal Error in situations where issuing a 403 Forbidden or 603 Decline would be much better.
As a "supplement", the following is an FI cause for which I couldn't find general examples, maybe because I'm a telecommunications guy.
    4. Race condition
4.1 Feature A is supposed to run on a given event timeout T. The new feature B must never run if A can run, but feature B is programmed to run on a timeout that is less than T, so B will always run regardless of A. Example: Call Forward programmed on 4 rings vs. Voicemail on 3 rings: if it rings 3 times and nobody answers, the call goes to the voicemail instead of being forwarded to the secretary.

    Choosing the Best Application Design Approach

Sometimes it isn't clear, when designing a new application, which of a multitude of possible design approaches represents the "best" one. Sometimes it may not just be the application design itself, but also choosing the "right" tool to accomplish the task(s) to be executed. Sometimes it may be deciding something as basic as whether to create the application as a web application or a client application. Once some basic design choices are made, like web or client, the designer may also be presented with multiple possibilities for accomplishing the task at hand, and choosing the best approach isn't always clear. Of course, there are projects that are mandated to be web or client, design tools may be mandated, and there are the talents of the design/programming team to consider, which sometimes restrict the design to a particular method. However, in my experience, there are often many choices for accomplishing a given task, that is to say, choices in how to produce a certain end result.
In reading a new ITKE blog, Taming the Wild, Wild Web, I thought of a common example of what I am referring to. One can choose to code a web page using in-line HTML, specifying literally everything to be included on the page, OR one can incorporate CSS to accomplish the desired "look and feel". While an argument can be made that using CSS has many advantages, if one has limited time and little knowledge of CSS, the developer isn't going to stop, learn CSS well, and then do the page (at least, that's my opinion!).
There will always be pros and cons to a particular method of accomplishing a programming task. Many times the "best" approach will be the method best known to the developer. At other times the "best" approach may be the one that executes most efficiently, and at yet other times it may simply be the one that produces the desired end result for the user, on time and on budget.

    Saturday, May 15, 2010

    ISEB Foundation Certificate in Software Testing Practice Exam - 3

    Q1. Software testing activities should start
    a. as soon as the code is written
    b. during the design stage
    c. when the requirements have been formally documented
    d. as soon as possible in the development life cycle

    Q2. Faults found by users are due to:
    a. Poor quality software
    b. Poor software and poor testing
    c. bad luck
    d. insufficient time for testing

    Q3. What is the main reason for testing software before releasing it?
a. to show that the system will work after release
    b. to decide when the software is of sufficient quality to release
    c. to find as many bugs as possible before release
    d. to give information for a risk based decision about release

Q4. Which of the following statements is not true?
    a. performance testing can be done during unit testing as well as during the testing of whole system
    b. The acceptance test does not necessarily include a regression test
    c. Verification activities should not involve testers (reviews, inspections etc)
    d. Test environments should be as similar to production environments as possible

    Q5. When reporting faults found to developers, testers should be:
    a. as polite, constructive and helpful as possible
    b. firm about insisting that a bug is not a “feature” if it should be fixed
    c. diplomatic, sensitive to the way they may react to criticism
    d. All of the above
    Q6. In which order should tests be run?
    a. the most important tests first
b. the most difficult tests first (to allow maximum time for fixing)
c. the easiest tests first (to give initial confidence)
    d. the order they are thought of

Q7. The later in the development life cycle a fault is discovered, the more expensive it is to fix. Why?
    a. the documentation is poor, so it takes longer to find out what the software is doing.
    b. wages are rising
c. the fault has been built into more documentation, code, tests, etc.
    d. none of the above

Q8. Which is not true? The black box tester
    a. should be able to understand a functional specification or requirements document
    b. should be able to understand the source code.
    c. is highly motivated to find faults
    d. is creative to find the system’s weaknesses

    Q9. A test design technique is
    a. a process for selecting test cases
    b. a process for determining expected outputs
    c. a way to measure the quality of software
    d. a way to measure in a test plan what has to be done

Q10. Testware (test cases, test dataset)
a. needs configuration management just like requirements, design and code
b. should be newly constructed for each new version of the software
c. is needed only until the software is released into production or use
d. does not need to be documented and commented, as it does not form part of the released software system

    Q11. An incident logging system
    a. only records defects
    b. is of limited value
    c. is a valuable source of project information during testing if it contains all incidents
    d. should be used only by the test team.

    Q12. Increasing the quality of the software, by better development methods, will affect the time needed for testing (the test phases) by:
    a. reducing test time
    b. no change
    c. increasing test time
    d. can’t say

    Q13. Coverage measurement
    a. is nothing to do with testing
    b. is a partial measure of test thoroughness
    c. branch coverage should be mandatory for all software
    d. can only be applied at unit or module testing, not at system testing

    Q14. When should you stop testing?
    a. when time for testing has run out.
    b. when all planned tests have been run
    c. when the test completion criteria have been met
    d. when no faults have been found by the tests run

Q15. Which of the following is true?
a. Component testing should be black box, system testing should be white box.
b. If you find a lot of bugs in testing, you should not be very confident about the quality of the software.
c. The fewer bugs you find, the better your testing was.
d. The more tests you run, the more bugs you will find.

Q16. What is the most important criterion in deciding which testing technique to use?
    a. how well you know a particular technique
    b. the objective of the test
    c. how appropriate the technique is for testing the application
    d. whether there is a tool to support the technique

Q17. If the pseudocode below were a programming language, how many tests are required to achieve 100% statement coverage?
    1. If x=3 then
    2. Display_messageX;
    3. If y=2 then
    4. Display_messageY;
    5. Else
    6. Display_messageZ;
    7. Else
    8. Display_messageZ;

    a. 1
    b. 2
    c. 3
    d. 4

Q18. Using the same code example as question 17, how many tests are required to achieve 100% branch/decision coverage?
    a. 1
    b. 2
    c. 3
    d. 4

    Q19. Which of the following is NOT a type of non-functional test?
    a. State-Transition
    b. Usability
    c. Performance
    d. Security

    Q20. Which of the following tools would you use to detect a memory leak?
    a. State analysis
    b. Coverage analysis
    c. Dynamic analysis
    d. Memory analysis

    Q21. Which of the following is NOT a standard related to testing?
    a. IEEE829
    b. IEEE610
    c. BS7925-1
    d. BS7925-2

    Q22. Which of the following is the component test standard?
    a. IEEE 829
    b. IEEE 610
    c. BS7925-1
    d. BS7925-2

    Q23. Which of the following statements are true?
    a. Faults in program specifications are the most expensive to fix.
    b. Faults in code are the most expensive to fix.
    c. Faults in requirements are the most expensive to fix
    d. Faults in designs are the most expensive to fix.

Q24. Which of the following is not an integration strategy?
    a. Design based
    b. Big-bang
    c. Bottom-up
    d. Top-down

    Q25. Which of the following is a black box design technique?
    a. statement testing
    b. equivalence partitioning
    c. error- guessing
    d. usability testing

Q26. A program with high cyclomatic complexity is most likely to be:
    a. Large
    b. Small
    c. Difficult to write
    d. Difficult to test

    Q27. Which of the following is a static test?
    a. code inspection
    b. coverage analysis
    c. usability assessment
    d. installation test

    Q28. Which of the following is the odd one out?
    a. white box
    b. glass box
    c. structural
    d. functional

    Q29. A program validates a numeric field as follows:
    "values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected"
    which of the following input values cover all of the equivalence partitions?
    a. 10,11,21
    b. 3,20,21
    c. 3,10,22
    d. 10,21,22

    Q30. Using the same specifications as question 29, which of the following covers the MOST boundary values?
    a. 9,10,11,22
    b. 9,10,21,22
    c. 10,11,21,22
    d. 10,11,20,21

Below are the answers to the questions in this post:


    ISEB Foundation Certificate in Software Testing Practice Exam - 1

    Q1. We split testing into distinct stages primarily because:


    a) Each test stage has a different purpose.

    b) It is easier to manage testing in stages.

    c) We can run different tests in different environments.

    d) The more stages we have, the better the testing.

    Q2. Which of the following is likely to benefit most from the use of test tools providing test capture and replay facilities?

    a) Regression testing

    b) Integration testing

    c) System testing

    d) User acceptance testing

    Q3. Which of the following statements is NOT correct?

    a) A minimal test set that achieves 100% LCSAJ coverage will also achieve 100% branch coverage.

    b) A minimal test set that achieves 100% path coverage will also achieve 100% statement coverage.

    c) A minimal test set that achieves 100% path coverage will generally detect more faults than one that achieves 100% statement coverage.

    d) A minimal test set that achieves 100% statement coverage will generally detect more faults than one that achieves 100% branch coverage.

    Q4. Which of the following requirements is testable?

    a) The system shall be user friendly.

    b) The safety-critical parts of the system shall contain 0 faults.

    c) The response time shall be less than one second for the specified design load.

    d) The system shall be built to be portable.

    Q5. Analyse the following highly simplified procedure:

    Ask: “What type of ticket do you require, single or return?”

    IF the customer wants ‘return’

    Ask: “What rate, Standard or Cheap-day?”

    IF the customer replies ‘Cheap-day’

Say: "That will be £11.20"

ELSE

Say: "That will be £19.50"

ENDIF

ELSE

Say: "That will be £9.75"

    ENDIF

    Now decide the minimum number of tests that are needed to ensure that all the questions have been asked, all combinations have occurred and all replies given.

    a) 3

    b) 4

    c) 5

    d) 6

    Q6. Error guessing:

    a) supplements formal test design techniques.

    b) can only be used in component, integration and system testing.

    c) is only performed in user acceptance testing.

    d) is not repeatable and should not be used.

    Q7. Which of the following is NOT true of test coverage criteria?

    a) Test coverage criteria can be measured in terms of items exercised by a test suite.

    b) A measure of test coverage criteria is the percentage of user requirements covered.

    c) A measure of test coverage criteria is the percentage of faults found.

    d) Test coverage criteria are often used when specifying test completion criteria.

    Q8. In prioritising what to test, the most important objective is to:

    a) find as many faults as possible.

    b) test high risk areas.

    c) obtain good test coverage.

    d) test whatever is easiest to test.

    Q9. Given the following sets of test management terms (v-z), and activity descriptions (1-5), which one of the following best pairs the two sets?
    v – test control

    w – test monitoring

    x - test estimation

    y - incident management

    z - configuration control

    1 - calculation of required test resources

    2 - maintenance of record of test results

    3 - re-allocation of resources when tests overrun

    4 - report on deviation from test plan

    5 - tracking of anomalous test results

    a) v-3,w-2,x-1,y-5,z-4

    b) v-2,w-5,x-1,y-4,z-3

c) v-3,w-4,x-1,y-5,z-2

d) v-2,w-1,x-4,y-3,z-5

    Q10. Which one of the following statements about system testing is NOT true?

    a) System tests are often performed by independent teams.

    b) Functional testing is used more than structural testing.

    c) Faults found during system tests can be very expensive to fix.

    d) End-users should be involved in system tests.

    Q11. Which of the following is false?

    a) Incidents should always be fixed.

    b) An incident occurs when expected and actual results differ.

    c) Incidents can be analysed to assist in test process improvement.

    d) An incident can be raised against documentation.

    Q12. Enough testing has been performed when:

    a) time runs out.

    b) the required level of confidence has been achieved.

    c) no more faults are found.

    d) the users won’t find any serious faults.

    Q13. Which of the following is NOT true of incidents?

    a) Incident resolution is the responsibility of the author of the software under test.

    b) Incidents may be raised against user requirements.

    c) Incidents require investigation and/or correction.

    d) Incidents are raised when expected and actual results differ.

    Q14. Which of the following is not described in a unit test standard?

    a) syntax testing

    b) equivalence partitioning

    c) stress testing

    d) modified condition/decision coverage

    Q15. Which of the following is false?

    a) In a system two different failures may have different severities.

    b) A system is necessarily more reliable after debugging for the removal of a fault.

    c) A fault need not affect the reliability of a system.

    d) Undetected errors may lead to faults and eventually to incorrect behaviour.

    Q16. Which one of the following statements, about capture-replay tools, is NOT correct?

    a) They are used to support multi-user testing.

    b) They are used to capture and animate user requirements.

    c) They are the most frequently purchased types of CAST tool.

    d) They capture aspects of user behavior.

    Q17. How would you estimate the amount of re-testing likely to be required?

    a) Metrics from previous similar projects

    b) Discussions with the development team

    c) Time allocated for regression testing

    d) a & b

    Q18. Which of the following is true of the V-model?

    a) It states that modules are tested against user requirements.

    b) It only models the testing phase.

    c) It specifies the test techniques to be used.

    d) It includes the verification of designs.

    Q19. The oracle assumption:

    a) is that there is some existing system against which test output may be checked.

    b) is that the tester can routinely identify the correct outcome of a test.

    c) is that the tester knows everything about the software under test.

    d) is that the tests are reviewed by experienced testers.

    Q20. Which of the following characterises the cost of faults?

    a) They are cheapest to find in the early development phases and the most expensive to fix in the latest test phases.

    b) They are easiest to find during system testing but the most expensive to fix then.

    c) Faults are cheapest to find in the early development phases but the most expensive to fix then.

    d) Although faults are most expensive to find during early development phases, they are cheapest to fix then.

    Q21. Which of the following should NOT normally be an objective for a test?

    a) To find faults in the software.

    b) To assess whether the software is ready for release.

    c) To demonstrate that the software doesn’t work.

    d) To prove that the software is correct.

    Q22. Which of the following is a form of functional testing?

    a) Boundary value analysis

    b) Usability testing

    c) Performance testing

    d) Security testing

    Q23. Which of the following would NOT normally form part of a test plan?

    a) Features to be tested

    b) Incident reports

    c) Risks

    d) Schedule

    Q24. Which of these activities provides the biggest potential cost saving from the use of CAST?

    a) Test management

    b) Test design

    c) Test execution

    d) Test planning

    Q25. Which of the following is NOT a white box technique?

    a) Statement testing

    b) Path testing

    c) Data flow testing

    d) State transition testing

    Q26. Data flow analysis studies:

    a) possible communications bottlenecks in a program.

    b) the rate of change of data values as a program executes.

    c) the use of data on paths through the code.

    d) the intrinsic complexity of the code.

Q27. In a system designed to work out the tax to be paid: an employee has £4000 of salary tax free; the next £1500 is taxed at 10%; the next £28000 is taxed at 22%; any further amount is taxed at 40%. To the nearest whole pound, which of these is a valid Boundary Value Analysis test case?

    a) £1500

    b) £32001

    c) £33501

    d) £28000

    Q28. An important benefit of code inspections is that they:

    a) enable the code to be tested before the execution environment is ready.

    b) can be performed by the person who wrote the code.

    c) can be performed by inexperienced staff.

    d) are cheap to perform.

    Q29. Which of the following is the best source of Expected Outcomes for User Acceptance Test scripts?

    a) Actual results

    b) Program specification

    c) User requirements

    d) System specification

    Q30. What is the main difference between a walkthrough and an inspection?

a) An inspection is led by the author, whilst a walkthrough is led by a trained moderator.

    b) An inspection has a trained leader, whilst a walkthrough has no leader.

    c) Authors are not present during inspections, whilst they are during walkthroughs.

d) A walkthrough is led by the author, whilst an inspection is led by a trained moderator.

    Q31. Which one of the following describes the major benefit of verification early in the life cycle?

    a) It allows the identification of changes in user requirements.

    b) It facilitates timely set up of the test environment.

    c) It reduces defect multiplication.

    d) It allows testers to become involved early in the project.

    Q32. Integration testing in the small:

    a) tests the individual components that have been developed.

    b) tests interactions between modules or subsystems.

    c) only uses components that form part of the live system.

    d) tests interfaces to other systems

    Q33. Static analysis is best described as:

    a) the analysis of batch programs.

    b) the reviewing of test plans.

    c) the analysis of program code.

    d) the use of black box testing

    Q34. Alpha testing is:

    a) post-release testing by end user representatives at the developer’s site.

    b) the first testing that is performed.

    c) pre-release testing by end user representatives at the developer’s site.

    d) pre-release testing by end user representatives at their sites.

    Q35. A failure is:

    a) found in the software; the result of an error.

    b) departure from specified behaviour.

    c) an incorrect step, process or data definition in a computer program.

    d) a human action that produces an incorrect result.

    Q36. In a system designed to work out the tax to be paid: an employee has £4000 of salary tax free; the next £1500 is taxed at 10%; the next £28000 is taxed at 22%; any further amount is taxed at 40%. Which of these groups of numbers would fall into the same equivalence class?

    a) £4800; £14000; £28000

    b) £5200; £5500; £28000

    c) £28001; £32000; £35000

    d) £5800; £28000; £32000

    Q37. The most important thing about early test design is that it:

    a) makes test preparation easier.

    b) means inspections are not required.

    c) can prevent fault multiplication.

    d) will find all faults.

    Q38. Which of the following statements about reviews is true?

    a) Reviews cannot be performed on user requirements specifications.

    b) Reviews are the least effective way of testing code.

    c) Reviews are unlikely to find faults in test plans.

    d) Reviews should be performed on specifications, code, and test plans.

    Q39. Test cases are designed during:

    a) test recording.

    b) test planning.

    c) test configuration.

    d) test specification.

    Q40. A configuration management system would NOT normally provide:

    a) linkage of customer requirements to version numbers.

    b) facilities to compare test results with expected results.

    c) the precise differences in versions of software component source code.

    d) restricted access to the source code library.


    Below are the answers to the questions in this post:




    1 - A

    2 - A

    3 - D

    4 - C

    5 - A

    6 - A

    7 - C

    8 - B

    9 - C

    10 - D

    11 - A

    12 - B

    13 - A

    14 - C

    15 - B

    16 - B

    17 - D

    18 - D

    19 - B

    20 - A

    21 - D

    22 - A

    23 - B

    24 - C

    25 - D

    26 - C

    27 - C

    28 - A

    29 - C

    30 - D

    31 - C

    32 - B

    33 - C

    34 - C

    35 - B

    36 - D

    37 - C

    38 - D

    39 - D

    40 - B
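
    As a quick cross-check on the two tax-band questions above (Q27 and Q36), here is a minimal Java sketch; the class and method names are invented for illustration, not taken from the exam. It derives the band boundaries from the figures in the questions: tax free up to £4000, the 10% band up to £5500, the 22% band up to £33500, and 40% beyond that.

    // Illustrative only: derives the tax bands used in Q27 and Q36.
    // Boundaries: £4000 (tax free), £5500 (10% band), £33500 (22% band).
    public class TaxBands {
     static String band(int salary) {
      if (salary <= 4000) return "tax free";
      if (salary <= 5500) return "10%";
      if (salary <= 33500) return "22%";
      return "40%";
     }
     public static void main(String[] args) {
      // Q27 (answer c): £33501 lies just above the £33500 boundary,
      // so it is a valid boundary value analysis test case.
      assert band(33500).equals("22%") && band(33501).equals("40%");
      // Q36 (answer d): £5800, £28000 and £32000 all fall in the 22% band,
      // so they belong to the same equivalence class.
      assert band(5800).equals("22%");
      assert band(28000).equals("22%");
      assert band(32000).equals("22%");
      System.out.println("Boundary checks pass (run with java -ea TaxBands).");
     }
    }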

    ISEB Foundation Certificate in Software Testing Practice Exam - 2

    Q1. A deviation from the specified or expected behaviour that is visible to end-users is called:


    a) an error
    b) a fault
    c) a failure
    d) a defect


    Q2. Regression testing should be performed:
    v) every week
    w) after the software has changed
    x) as often as possible
    y) when the environment has changed
    z) when the project manager says

    a) v & w are true, x, y & z are false
    b) w, x & y are true, v & z are false
    c) w & y are true, v, x & z are false
    d) w is true, v, x, y & z are false

    Q3. IEEE 829 test plan documentation standard contains all of the following except

    a) test items
    b) test deliverables
    c) test tasks
    d) test specifications

    Q4. When should testing be stopped?

    a) when all the planned tests have been run
    b) when time has run out
    c) when all faults have been fixed correctly
    d) it depends on the risks for the system being tested

    Q5. Order numbers on a stock control system can range between 10000 and 99999 inclusive. Which of the following inputs might be a result of designing tests for only valid equivalence classes and valid boundaries?

    a) 1000, 50000, 99999
    b) 9999, 50000, 100000
    c) 10000, 50000, 99999
    d) 10000, 99999, 100000

    Q6. Consider the following statements about early test design:

    i. early test design can prevent fault multiplication
    ii. faults found during early test design are more expensive to fix
    iii. early test design can find faults
    iv. early test design can cause changes to the requirements
    v. early test design normally takes more effort

    a) i, iii & iv are true; ii & v are false
    b) iii & iv are true; i, ii & v are false
    c) i, iii, iv & v are true; ii is false
    d) i & ii are true; iii, iv & v are false

    Q7. Non-functional system testing includes:

    a) testing to see where the system does not function correctly
    b) testing quality attributes of the system including performance and usability
    c) testing a system function using only the software required for that function
    d) testing for functions that should not exist

    Q8. Which of the following is NOT part of configuration management?

    a) auditing conformance to ISO 9000
    b) status accounting of configuration items
    c) identification of test versions
    d) controlled library access

    Q9. Which of the following is the main purpose of the integration strategy for integration testing in the small?

    a) to ensure that all of the small modules are tested adequately
    b) to ensure that the system interfaces to other systems and networks
    c) to specify which modules to combine when, and how many at once
    d) to specify how the software should be divided into modules

    Q10. What is the purpose of a test completion criterion?

    a) to know when a specific test has finished its execution
    b) to ensure that the test case specification is complete
    c) to set the criteria used in generating test inputs
    d) to determine when to stop testing

    Q11. Consider the following statements:
    i. an incident may be closed without being fixed.
    ii. incidents may not be raised against documentation.
    iii. the final stage of incident tracking is fixing.
    iv. the incident record does not include information on test environments.

    a) ii is true, i, iii and iv are false
    b) i is true, ii, iii and iv are false
    c) i and iv are true, ii and iii are false
    d) i and ii are true, iii and iv are false

    Q12. Given the following code, which statement is true about the minimum number of test cases required for full statement and branch coverage?
    Read p
    Read q
    IF p+q > 100 THEN
    Print "Large"
    ENDIF
    IF p > 50 THEN
    Print "p Large"
    ENDIF

    a) 1 test for statement coverage, 3 for branch coverage
    b) 1 test for statement coverage, 2 for branch coverage
    c) 1 test for statement coverage, 1 for branch coverage
    d) 2 tests for statement coverage, 2 for branch coverage

    Q13. Consider the following statements:
    i. 100% statement coverage guarantees 100% branch coverage.
    ii. 100% branch coverage guarantees 100% statement coverage.
    iii. 100% branch coverage guarantees 100% decision coverage.
    iv. 100% decision coverage guarantees 100% branch coverage.
    v. 100% statement coverage guarantees 100% decision coverage.

    a) ii is True; i, iii, iv & v are False
    b) i & v are True; ii, iii & iv are False
    c) ii & iii are True; i, iv & v are False
    d) ii, iii & iv are True; i & v are False

    Q14. Functional system testing is:

    a) testing that the system functions with other systems
    b) testing that the components that comprise the system function together
    c) testing the end to end functionality of the system as a whole
    d) testing the system performs functions within specified response times

    Q15. Incidents would not be raised against:

    a) requirements
    b) documentation
    c) test cases
    d) improvements suggested by users

    Q16. Which of the following items would not come under Configuration Management?

    a) operating systems
    b) test documentation
    c) live data
    d) user requirement documents

    Q17. Maintenance testing is:

    a) updating tests when the software has changed
    b) testing a released system that has been changed
    c) testing by users to ensure that the system meets a business need
    d) testing to maintain business advantage

    Q18. What can static analysis NOT find?

    a) the use of a variable before it has been defined
    b) unreachable (“dead”) code
    c) memory leaks
    d) array bound violations

    Q19. Which of the following techniques is NOT a black box technique?

    a) state transition testing
    b) LCSAJ
    c) syntax testing
    d) boundary value analysis

    Q20. Beta testing is:

    a) performed by customers at their own site
    b) performed by customers at the software developer's site
    c) performed by an Independent Test Team
    d) performed as early as possible in the lifecycle

    Q21. Given the following types of tool, which tools would typically be used by developers, and which by an independent system test team?
    i. static analysis
    ii. performance testing
    iii. test management
    iv. dynamic analysis

    a) developers would typically use i and iv; test team ii and iii
    b) developers would typically use i and iii; test team ii and iv
    c) developers would typically use ii and iv; test team i and iii
    d) developers would typically use i, iii and iv; test team ii

    Q22. The main focus of acceptance testing is:

    a) finding faults in the system

    b) ensuring that the system is acceptable to all users

    c) testing the system with other systems

    d) testing from a business perspective

    Q23. Which of the following statements about component testing is FALSE?

    a) black box test design techniques all have an associated test measurement technique

    b) white box test design techniques all have an associated test measurement technique

    c) cyclomatic complexity is not a test measurement technique

    d) black box test measurement techniques all have an associated test design technique

    Q24. Which of the following statements is NOT true?

    a) inspection is the most formal review process

    b) inspections should be led by a trained leader

    c) managers can perform inspections on management documents

    d) inspection is appropriate even when there are no written documents

    Q25. A typical commercial test execution tool would be able to perform all of the following, EXCEPT:

    a) calculating expected outputs

    b) comparison of expected outcomes with actual outcomes

    c) recording test inputs

    d) reading test values from a data file

    Q26. The difference between re-testing and regression testing is:

    a) re-testing ensures the original fault has been removed; regression testing looks for unexpected side-effects

    b) re-testing looks for unexpected side-effects; regression testing ensures the original fault has been removed

    c) re-testing is done after faults are fixed; regression testing is done earlier

    d) re-testing is done by developers; regression testing is done by independent testers

    Q27. Expected results are:

    a) only important in system testing

    b) only used in component testing

    c) most useful when specified in advance

    d) derived from the code

    Q28. What type of review requires formal entry and exit criteria, including metrics:

    a) walkthrough

    b) inspection

    c) management review

    d) post project review

    Q29. Which of the following uses Impact Analysis most?

    a) component testing

    b) non-functional system testing

    c) user acceptance testing

    d) maintenance testing

    Q30. What is NOT included in typical costs for an inspection process?

    a) setting up forms and databases

    b) analysing metrics and improving processes

    c) writing the documents to be inspected

    d) time spent on the document outside the meeting

    Q31. Which of the following is NOT a reasonable test objective:

    a) to find faults in the software

    b) to prove that the software has no faults

    c) to give confidence in the software

    d) to find performance problems

    Q32. Which expression best matches the following characteristics of the review processes:
    1. led by the author

    2. undocumented

    3. no management participation

    4. led by a moderator or leader

    5. uses entry and exit criteria

    s) inspection

    t) peer review

    u) informal review

    v) walkthrough

    a) s = 4 and 5, t = 3, u = 2, v = 1

    b) s = 4, t = 3, u = 2 and 5, v = 1

    c) s = 1 and 5, t = 3, u = 2, v = 4

    d) s = 4 and 5, t = 1, u= 2, v = 3

    Q33. Which of the following is NOT part of system testing?

    a) business process-based testing

    b) performance, load and stress testing

    c) usability testing

    d) top-down integration testing

    Q34. Which statement about expected outcomes is FALSE?

    a) expected outcomes are defined by the software's behaviour

    b) expected outcomes are derived from a specification, not from the code

    c) expected outcomes should be predicted before a test is run

    d) expected outcomes may include timing constraints such as response times

    Q35. The standard that gives definitions of testing terms is:

    a) ISO/IEC 12207

    b) BS 7925-1

    c) ANSI/IEEE 829

    d) ANSI/IEEE 729

    Q36. The cost of fixing a fault:

    a) is not important

    b) increases the later a fault is found

    c) decreases the later a fault is found

    d) can never be determined

    Q37. Which of the following is NOT included in the Test Plan document of the Test Documentation Standard?

    a) what is not to be tested

    b) test environment properties

    c) quality plans

    d) schedules and deadlines

    Q38. Could reviews or inspections be considered part of testing?

    a) no, because they apply to development documentation

    b) no, because they are normally applied before testing

    c) yes, because both help detect faults and improve quality

    d) yes, because testing includes all non-constructive activities

    Q39. Which of the following is not part of performance testing?

    a) measuring response times

    b) recovery testing

    c) simulating many users

    d) generating many transactions

    Q40. Error guessing is best used:

    a) after more formal techniques have been applied

    b) as the first approach to deriving test cases

    c) by inexperienced testers

    d) after the system has gone live

    Below are the answers to the questions in this post:


    Thursday, May 13, 2010

    http://whitepapers.techrepublic.com.com/abstract.aspx?docid=919201&promo=100511&tag=tr-left

    Cool SOA Testing stuff with a good explanation

    http://www.thbs.com/pdfs/SOA_Test_Methodology.pdf

    SOA Testing

    http://www.soatesting.com/

    Monday, May 10, 2010

    Unit Testing

    In computer programming, unit testing is a software verification and validation method in which a programmer tests if individual units of source code are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure.
    Ideally, each test case is independent from the others: substitutes like method stubs, mock objects, fakes and test harnesses can be used to assist testing a module in isolation. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. Its implementation can vary from being very manual (pencil and paper) to being formalized as part of build automation.
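
    For instance, a hand-rolled stub lets a unit be tested without its real collaborator. In this sketch the names PriceService, Checkout and CheckoutTest are invented for illustration, not taken from any framework.

    // Sketch: testing Checkout in isolation by substituting a stub for its
    // collaborator. All names here are illustrative.
    interface PriceService {
     int priceOf(String item);
    }
    class Checkout {
     private final PriceService prices;
     Checkout(PriceService prices) { this.prices = prices; }
     int total(String a, String b) { return prices.priceOf(a) + prices.priceOf(b); }
    }
    class CheckoutTest {
     public void testTotal() {
      // The stub replaces a real price lookup (say, a database call), so the
      // test exercises Checkout alone and stays fast and repeatable.
      PriceService stub = new PriceService() {
       public int priceOf(String item) {
        return item.equals("apple") ? 30 : 50;
       }
      };
      Checkout checkout = new Checkout(stub);
      assert checkout.total("apple", "pear") == 80;
     }
    }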

    Benefits

    The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. A unit test provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits. Unit tests find problems early in the development cycle.

    Facilitates change

    Unit testing allows the programmer to refactor code at a later date, and make sure the module still works correctly (i.e. regression testing). The procedure is to write test cases for all functions and methods so that whenever a change causes a fault, it can be quickly identified and fixed.
    Readily-available unit tests make it easy for the programmer to check whether a piece of code is still working properly.
    In continuous unit testing environments, through the inherent practice of sustained maintenance, unit tests will continue to accurately reflect the intended use of the executable and code in the face of any change. Depending upon established development practices and unit test coverage, up-to-the-second accuracy can be maintained.

    Simplifies integration

    Unit testing may reduce uncertainty in the units themselves and can be used in a bottom-up testing style approach. By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier.
    An elaborate hierarchy of unit tests does not equal integration testing, despite what a programmer may think. Integration testing cannot be fully automated and still relies heavily on human testers.

    Documentation

    Unit testing provides a sort of living documentation of the system. Developers looking to learn what functionality is provided by a unit and how to use it can look at the unit tests to gain a basic understanding of the unit API.
    Unit test cases embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate/inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A unit test case, in and of itself, documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development.
    On the other hand, ordinary narrative documentation is more susceptible to drifting from the implementation of the program and will thus become outdated (e.g. design changes, feature creep, relaxed practices in keeping documents up-to-date).

    Design

    When software is developed using a test-driven approach, the unit test may take the place of formal design. Each unit test can be seen as a design element specifying classes, methods, and observable behaviour. The following Java example will help illustrate this point.
    Here is a test class that specifies a number of elements of the implementation. First, that there must be an interface called Adder, and an implementing class with a zero-argument constructor called AdderImpl. It goes on to assert that the Adder interface should have a method called add, with two integer parameters, which returns another integer. It also specifies the behaviour of this method for a small range of values.
    public class TestAdder {
     // Each assertion pins down the expected behaviour of Adder.add for a
     // representative value (run with assertions enabled: java -ea).
     public void testSum() {
      Adder adder = new AdderImpl();
      assert(adder.add(1, 1) == 2);
      assert(adder.add(1, 2) == 3);
      assert(adder.add(2, 2) == 4);
      assert(adder.add(0, 0) == 0);
      assert(adder.add(-1, -2) == -3);
      assert(adder.add(-1, 1) == 0);
      assert(adder.add(1234, 988) == 2222);
     }
    }
    In this case the unit test, having been written first, acts as a design document specifying the form and behaviour of a desired solution, but not the implementation details, which are left for the programmer. Following the 'do the simplest thing that could possibly work' practice, the easiest solution that will make the test pass is shown below.
    interface Adder {
     int add(int a, int b);
    }
    // The simplest implementation that makes TestAdder pass.
    class AdderImpl implements Adder {
     int add(int a, int b) {
      return a + b;
     }
    }
    Unlike diagram-based design methods, using a unit test as the design has one significant advantage: the design document (the unit test itself) can be used to verify that the implementation adheres to the design. UML suffers from the fact that although a diagram may name a class Customer, the developer can call the class Wibble and nothing in the system would note this discrepancy. With the unit-test design method, the tests will never pass if the developer does not implement the solution according to the design.
    It is true that unit testing lacks some of the accessibility of a diagram, but UML diagrams are now easily generated for most modern languages by free tools (usually available as extensions to IDEs). Free tools, like those based on the xUnit framework, outsource the graphical rendering of a view for human consumption to another system.

    Separation of interface from implementation

    Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example is a class that depends on a database: in order to test the class, the tester often writes code that interacts with the database. This is a mistake, because a unit test should usually not go outside its own class boundary, and especially should not cross process or network boundaries, because doing so can introduce unacceptable performance problems into the unit test suite. Crossing such unit boundaries turns unit tests into integration tests, and when test cases fail, it becomes less clear which component is causing the failure.
    Instead, the software developer should create an abstract interface around the database queries, and then implement that interface with their own mock object. By abstracting this necessary attachment from the code (temporarily reducing the net effective coupling), the independent unit can be more thoroughly tested than may have been previously achieved. This results in a higher quality unit that is also more maintainable.
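
    A minimal sketch of that idea follows, with invented names (UserStore as the abstract interface, Greeter as the unit under test). Because the mock is an in-memory object, the test never opens a database connection, which keeps it a unit test rather than an integration test.

    // Sketch: abstracting a data store behind an interface so a mock can
    // replace it in unit tests. UserStore and Greeter are invented names.
    interface UserStore {
     String nameFor(int userId);
    }
    class Greeter {
     private final UserStore store;
     Greeter(UserStore store) { this.store = store; }
     String greet(int userId) { return "Hello, " + store.nameFor(userId); }
    }
    class GreeterTest {
     public void testGreet() {
      // Mock implementation: no database is involved, so the test stays
      // within the class boundary and runs quickly.
      UserStore mock = new UserStore() {
       public String nameFor(int userId) { return "Alice"; }
      };
      assert new Greeter(mock).greet(42).equals("Hello, Alice");
     }
    }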

    Unit testing limitations

    Testing cannot be expected to catch every error in the program: it is impossible to evaluate every execution path in all but the most trivial programs. The same is true for unit testing. Additionally, unit testing by definition only tests the functionality of the units themselves. Therefore, it will not catch integration errors or broader system-level errors (such as functions performed across multiple units, or non-functional test areas such as performance). Unit testing must be done in conjunction with other software testing activities. Like all forms of software testing, unit tests can only show the presence of errors; they cannot show the absence of errors.
    Software testing is a combinatorial problem. For example, every boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code. This obviously takes time, and the investment may not be worth the effort. There are also many problems that cannot easily be tested at all; for example, those that are nondeterministic or involve multiple threads. In addition, the code written for a unit test is likely to be at least as buggy as the code it is testing. Fred Brooks in The Mythical Man-Month advises: never go to sea with two chronometers; take one or three. Meaning: if two chronometers contradict, how do you know which one is correct?
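    To make the two-tests-per-decision point concrete, here is a minimal hypothetical example (the Discount class is invented for illustration); its single IF statement already needs two test cases for full branch coverage.

    // One boolean decision requires at least two tests: one per outcome.
    public class Discount {
     static int discount(int total) {
      if (total > 100) { // both the true and the false outcome need a test
       return 10;
      }
      return 0;
     }
     public static void main(String[] args) {
      assert discount(150) == 10; // "true" outcome
      assert discount(50) == 0;   // "false" outcome
     }
    }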
    To obtain the intended benefits from unit testing, rigorous discipline is needed throughout the software development process. It is essential to keep careful records not only of the tests that have been performed, but also of all changes that have been made to the source code of this or any other unit in the software. Use of a version control system is essential. If a later version of the unit fails a particular test that it had previously passed, the version-control software can provide a list of the source code changes (if any) that have been applied to the unit since that time.
    It is also essential to implement a sustainable process for ensuring that test case failures are reviewed daily and addressed immediately. If such a process is not implemented and ingrained into the team's workflow, the application will evolve out of sync with the unit test suite, increasing false positives and reducing the effectiveness of the test suite.

    Applications

    Extreme Programming

    Unit testing is the cornerstone of Extreme Programming, which relies on an automated unit testing framework. This automated unit testing framework can be either third party, e.g., xUnit, or created within the development group.
    Extreme Programming uses the creation of unit tests for test-driven development. The developer writes a unit test that exposes either a software requirement or a defect. This test will fail because either the requirement isn't implemented yet, or because it intentionally exposes a defect in the existing code. Then, the developer writes the simplest code to make the test, along with other tests, pass.
    Most code in a system is unit tested, but not necessarily all paths through the code. Extreme Programming mandates a 'test everything that can possibly break' strategy, over the traditional 'test every execution path' method. This leads developers to write fewer tests than classical methods would, but this is not really a problem; it is more a restatement of fact, as classical methods have rarely been followed methodically enough for all execution paths to be thoroughly tested. Extreme Programming simply recognizes that testing is rarely exhaustive (because it is often too expensive and time-consuming to be economically viable) and provides guidance on how to focus limited resources effectively.
    Crucially, the test code is considered a first class project artifact in that it is maintained at the same quality as the implementation code, with all duplication removed. Developers release unit testing code to the code repository in conjunction with the code it tests. Extreme Programming's thorough unit testing allows the benefits mentioned above, such as simpler and more confident code development and refactoring, simplified code integration, accurate documentation, and more modular designs. These unit tests are also constantly run as a form of regression test.

    Techniques

    Unit testing is commonly automated, but may still be performed manually. The IEEE does not favor one over the other. A manual approach to unit testing may employ a step-by-step instructional document. Nevertheless, the objective in unit testing is to isolate a unit and validate its correctness. Automation is efficient for achieving this, and enables the many benefits listed in this article. Conversely, if not planned carefully, a careless manual unit test case may execute as an integration test case that involves many software components, and thus preclude the achievement of most if not all of the goals established for unit testing.
    Under the automated approach, to fully realize the effect of isolation, the unit or code body subjected to the unit test is executed within a framework outside of its natural environment, that is, outside of the product or calling context for which it was originally created. Testing in an isolated manner has the benefit of revealing unnecessary dependencies between the code being tested and other units or data spaces in the product. These dependencies can then be eliminated.
    Using an automation framework, the developer codes criteria into the test to verify the correctness of the unit. During execution of the test cases, the framework logs those that fail any criterion. Many frameworks will also automatically flag and report in a summary these failed test cases. Depending upon the severity of a failure, the framework may halt subsequent testing.
    As a consequence, unit testing is traditionally a motivator for programmers to create decoupled and cohesive code bodies. This practice promotes healthy habits in software development. Design patterns, unit testing, and refactoring often work together so that the best solution may emerge.
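
    As one concrete example of such a framework, a test in the classic JUnit 3 style extends junit.framework.TestCase; the framework then collects methods whose names begin with "test", runs them, and reports any assertion failures in its summary. This sketch reuses the Adder example from the Design section above and assumes JUnit is on the classpath.

    import junit.framework.TestCase;

    // A JUnit-style version of the Adder test: the framework discovers,
    // runs and reports the test methods automatically.
    public class AdderJUnitTest extends TestCase {
     public void testAdd() {
      Adder adder = new AdderImpl();
      assertEquals(4, adder.add(2, 2));
      assertEquals(0, adder.add(-1, 1));
     }
    }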

    Unit testing frameworks

    Unit testing frameworks are most often third-party products that are not distributed as part of the compiler suite. They help simplify the process of unit testing, having been developed for a wide variety of languages.
    It is generally possible to perform unit testing without the support of a specific framework by writing client code that exercises the units under test and uses assertions, exception handling, or other control flow mechanisms to signal failure. Unit testing without a framework is valuable in that there is a barrier to entry for the adoption of unit testing; having scant unit tests is hardly better than having none at all, whereas once a framework is in place, adding unit tests becomes relatively easy. In some frameworks many advanced unit test features are missing or must be hand-coded.
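
    A framework-less test in the spirit described above can be as simple as a main method that exercises the unit and signals failure through its exit status. This sketch reuses the Adder example and is illustrative only.

    // Unit testing without a framework: plain client code exercising the unit.
    public class AdderSmokeTest {
     public static void main(String[] args) {
      Adder adder = new AdderImpl();
      if (adder.add(2, 3) != 5) {
       System.err.println("FAIL: add(2, 3) != 5");
       System.exit(1); // a non-zero exit status tells a build script the test failed
      }
      System.out.println("PASS");
     }
    }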

    Language-level unit testing support

    Some programming languages support unit testing directly. Their grammar allows the direct declaration of unit tests without importing a library (whether third party or standard). Additionally, the boolean conditions of the unit tests can be expressed in the same syntax as boolean expressions used in non-unit test code, such as what is used for "if" and "while" statements.
    Languages that directly support unit testing include:
    • Cobra
    • D
     For more depth on Cobra, D, and other languages with built-in unit testing support, see Wikipedia: http://en.wikipedia.org/wiki/Unit_testing