Friday, October 22, 2010

The Tao of Testing

Fred Brooks, in The Mythical Man-Month, recommends devoting fully half of a project's schedule to testing. In any major software development project, with many people working on the code, testing is essential to make sure that the system performs as the requirements say it should.
However, even if you're a development team of one, you still have an interest in ensuring that your work has proper Quality Assurance (QA) documentation, for three main reasons:
  1. Your future business depends entirely on your professional reputation - good clients look for a track record of delivering on requirements, and anything which enhances that reputation is A Good Thing.
  2. Once the system is handed over to the client, you will have an audit trail of testing, documenting that the system was working as delivered. If it later fails, that record safeguards you against potential legal and reputational action from a panicking client.
  3. If you want a self-interested reason (and most of us do at some point), remember that the client should pay for all of this testing - it's all chargeable time, and the client gets a better system at the end of it.
So what do you have to do?

Test Scripts

Testing is a systematic discipline. You need to ensure that you test every piece of functionality against its specification, and that a test repeated after a bug has been fixed is identical to the test which highlighted the bug in the first place.
The best way to ensure that there are no gaps in your test programme is to produce a test script. This will allow you to check that no area of functionality slips through the net, either at design stage, or while the tests are being performed.
Your script should outline the steps which testers will follow, and list the expected results of each test. The detail you go into will depend on the time and budget available for testing.
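To make the idea concrete, here's a minimal sketch in Python of a machine-readable test script - the step IDs, descriptions and expected results are invented examples, not a prescribed format:

```python
# A minimal sketch of a machine-readable test script. The steps and
# expected results below are invented examples for illustration.
test_script = [
    {"id": "TS-01",
     "step": "Add one 'Widget' to an empty cart",
     "expected": "Cart shows 1 item, subtotal equals widget price"},
    {"id": "TS-02",
     "step": "Set the widget quantity to 0",
     "expected": "Cart is empty, subtotal is 0.00"},
]

def run_script(script):
    """Walk a tester through each step and record the outcome."""
    results = []
    for test in script:
        print(f"{test['id']}: {test['step']}")
        print(f"  Expected: {test['expected']}")
        outcome = input("  Pass or fail? (p/f): ").strip().lower()
        notes = input("  Notes (any errors seen): ")
        results.append({**test, "passed": outcome == "p", "notes": notes})
    return results
```

The recorded results can then be archived - and printed for signature - alongside the rest of the project documentation, as described below.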
A sensible way of distributing the scripts is electronically - often as a word processing document. This allows testers to record any errors which occur alongside the tests which brought them out. You should archive the completed documents in read-only format with the rest of the project documentation. To be on the safe side, have the testers print and sign the sheets, and store these with the documentation too.

Types of Testing

Usability Testing
Usability testing should happen before a single element of the user interface (including information architecture) is fixed. Performing usability tests at this stage allows you to change the interface quickly and cheaply - backing out an interface once it is coded is always going to be difficult. The best way to perform usability testing at this stage is to build a prototype of your proposed interface and test that. Feedback from the testers will let you amend your prototype quickly and go through another iteration. Research (most famously Jakob Nielsen's) shows that five testers are enough to find around 85% of the usability issues in each iteration. After a few iterations, you're unlikely to have substantive issues left.
Unit Testing
Typically, a system contains a number of pieces such as
  • 'the bit which displays the product'
  • 'the bit which puts the product into the shopping cart'
  • 'the bit which verifies the credit card and takes the payment'
and so on. Each of these is a unit, and you need to make sure that each unit produces the appropriate output for the input you give it, including sensible error trapping. A reasonably common (but by no means the only) way of doing this is to exercise the unit at the command line, as this bypasses possible errors introduced by the web server process itself. All you are doing is checking that the basic code does what it says on the tin. Note that for complicated systems, each unit might be a system in its own right, with sub-units; the division between system and unit tests in such cases is a little hazy.
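As a sketch of what such a test looks like, here's a Python unittest exercising a hypothetical 'bit which puts the product into the shopping cart' (the ShoppingCart class and its methods are invented for illustration), including the error-trapping case:

```python
import unittest

class ShoppingCart:
    """Hypothetical cart unit under test."""
    def __init__(self):
        self.items = {}

    def add(self, product_id, quantity):
        if quantity < 1:
            raise ValueError("quantity must be at least 1")
        self.items[product_id] = self.items.get(product_id, 0) + quantity

class TestShoppingCart(unittest.TestCase):
    def test_add_single_item(self):
        cart = ShoppingCart()
        cart.add("WIDGET-1", 1)
        self.assertEqual(cart.items["WIDGET-1"], 1)

    def test_rejects_nonsense_quantity(self):
        # Sensible error trapping: bad input must fail loudly,
        # not silently corrupt the cart.
        cart = ShoppingCart()
        with self.assertRaises(ValueError):
            cart.add("WIDGET-1", 0)
        self.assertEqual(cart.items, {})

if __name__ == "__main__":
    unittest.main()
```

Running this at the command line, as suggested above, exercises the code with the web server entirely out of the loop.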
System Testing
Once you have all your units behaving as expected, you need to string them together into a system and test it in a semi-real environment - one which differs from final operation only in that you're not yet working with real users and live data.
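One common way of keeping that semi-real environment cleanly separated from the live one is a single configuration switch; a minimal sketch (the variable names, database URLs and gateway endpoints are placeholders, not real services):

```python
import os

# Point the system at test or live services depending on one
# environment variable, so the same code runs in both worlds.
ENV = os.environ.get("SHOP_ENV", "test")

CONFIG = {
    "test": {"db_url": "postgresql://localhost/shop_test",
             "payment_gateway": "https://sandbox.payments.example.com"},
    "live": {"db_url": "postgresql://dbhost/shop_live",
             "payment_gateway": "https://api.payments.example.com"},
}[ENV]
```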
Integration Testing
As eBusinesses become more complicated, there is a growing need for the systems you produce to be integrated with other systems - the financial reporting system, the logistics system, the customer database and so on. The purpose of integration testing is to ensure that your system's inputs from and outputs to those other systems are as expected. This means you will need to ensure that test data fed between the systems cannot be mistaken for live data. That said, at some point you will need to put a real transaction through your test system as an end-to-end test. A useful way of doing this (and one popular with developers) is to give the team working on the site an allowance to spend on it as 'friendly orders', in return for reporting back any customer-facing inconsistencies in the entire process.
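Here's a minimal sketch of one way to keep test transactions out of the live figures - every record carries a flag set when it was created, and the feed to the downstream systems filters on it (the field names are my own invention):

```python
def export_orders(orders, include_test_data=False):
    """Prepare orders for the financial reporting feed.

    Each order carries an is_test flag set at creation time; the
    live feed drops flagged orders so test traffic can never be
    mistaken for revenue.
    """
    return [o for o in orders
            if include_test_data or not o.get("is_test", False)]

orders = [
    {"order_id": 1001, "total": 25.00, "is_test": False},
    {"order_id": 1002, "total": 99.99, "is_test": True},  # test transaction
]
print(export_orders(orders))  # only order 1001 reaches reporting
```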
Volume Testing
Far too often, an eBusiness is a victim of its own success. From the Slashdot effect to the sheer stupidity of the Marketing department, if your system won't handle the loads put on it by users, you are going to lose both face and money. Larger eRetailers are now building their systems to handle over a thousand simultaneous users. While you may not be in that league, you need to simulate the loads you anticipate, plus enough headroom for traffic growth. Get it wrong, and you may be facing a launch delay of months.
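A crude load simulation needs nothing more than the standard library; this sketch (the URL and user numbers are placeholders, and a dedicated load-testing tool will take you much further) fires concurrent requests and reports failures and the mean response time:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://test.example.com/"   # placeholder: your staging system
SIMULTANEOUS_USERS = 50            # placeholder: your anticipated peak
REQUESTS_PER_USER = 10

def timed_request(url):
    """Fetch the URL once, returning elapsed seconds or None on failure."""
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()
        return time.time() - start
    except Exception:
        return None

with ThreadPoolExecutor(max_workers=SIMULTANEOUS_USERS) as pool:
    timings = list(pool.map(timed_request,
                            [URL] * SIMULTANEOUS_USERS * REQUESTS_PER_USER))

ok = [t for t in timings if t is not None]
print(f"{len(timings) - len(ok)} failed requests out of {len(timings)}")
if ok:
    print(f"mean response time {sum(ok) / len(ok):.2f}s")
```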
Regression Testing
Unless you are spectacularly lucky, your testing will highlight errors in your system. And there's a better than average chance that fixing those errors will introduce new errors. Regression testing is a matter of going back over your previous tests to ensure that:
  1. The bug you previously found has been fixed
  2. No new bugs have been introduced.
If you are producing release notes for each patch, it should be fairly easy to track down the cause of new errors introduced with a patch. The upshot of regression testing is that testing is inevitably an iterative discipline - you will need to continue to test, fix and re-test until you have a system which meets the requirements.
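One way to make both checks routine is to keep the test which first caught each bug in the automated suite, named after the bug; a sketch (the bug number and the discount function are invented for illustration):

```python
import unittest

def apply_discount(total, discount):
    """Hypothetical unit in which bug #42 was found."""
    return round(total * (1 - discount), 2)

class TestBug42(unittest.TestCase):
    def test_discount_rounding(self):
        # Regression test for (invented) bug #42: a 15% discount on
        # 19.99 once came back as unrounded floating-point dust. The
        # test which caught it stays in the suite so it cannot return.
        self.assertEqual(apply_discount(19.99, 0.15), 16.99)

if __name__ == "__main__":
    unittest.main()
```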
User Acceptance Testing (UAT)
Once you have what appears to be a working system which meets all the requirements, the final piece of work you must undertake before you can ask for your cheque is User Acceptance Testing. This is essentially stepping through all the functionality with the client staff who are actually going to use the system. If your system fails UAT yet meets the paper requirements, then you have an issue with your requirements documentation. You will need to resolve this with the client - have there been scope changes since the requirements document was signed off? - before you can justifiably ask the client to sign off all your work and pay you.

Reporting Errors

Once your testing has highlighted issues with the system, you need a process to ensure that each one is prioritised, diagnosed and fixed.
A common approach is to have a central database which logs each new error, and captures the following information:
  • An ID number
  • Status (new, in progress or resolved)
  • Priority:
    1. Red (ie causes non-functionality in the system. Must get fixed before go-live). I've also seen this subdivided into "Red" and "Mother of Red".
    2. Amber (ie causes interference to user tasks. Should get fixed before go-live).
    3. Green (ie causes annoyance to users. Will get fixed if there is time before go-live).
  • Patch ID which will resolve (or has resolved) the error
  • An owner - a named individual who will take responsibility for ensuring that the fix happens. This need not be the person who actually fixes it.
  • A detailed description of the error, including any error messages, and screenshots where appropriate.
On each update of an error report, you should record an audit trail, outlining what's been changed, who's changed it and when.
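A minimal sketch of such an error log in SQLite (via Python; the table and column names are my own, following the fields listed above):

```python
import sqlite3

conn = sqlite3.connect("errors.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS errors (
    id          INTEGER PRIMARY KEY,
    status      TEXT CHECK (status IN ('new', 'in progress', 'resolved')),
    priority    TEXT CHECK (priority IN ('red', 'amber', 'green')),
    patch_id    TEXT,            -- patch which will resolve / has resolved it
    owner       TEXT NOT NULL,   -- named individual responsible for the fix
    description TEXT NOT NULL    -- error messages, screenshot references
);

-- Audit trail: one row per update to an error report, recording
-- what changed, who changed it and when.
CREATE TABLE IF NOT EXISTS error_audit (
    error_id   INTEGER REFERENCES errors(id),
    changed_by TEXT NOT NULL,
    changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    change     TEXT NOT NULL
);
""")
conn.commit()
```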
