"We have separate analysis, design, coding and testing departments, and most projects follow the corresponding phases in a more or less standard waterfall pattern," said Jim, the operations manager. "It mostly works well, but we do tend to find a lot of defects at the testing stage, and because of tight schedules we often have to ship before the defects are fixed, supplying patches to the customers later." It was no surprise to hear that the company was rapidly losing its reputation and its customer base. The sales and marketing department were constantly complaining of abuse and ridicule from the customers during contract negotiations.
In response, as I often do, I drew a little picture of a project timeline, from project inception to product delivery, and asked him, "Where on this line would you least like to discover your defects?"
Occasionally a respondent will claim that he never likes to find defects anywhere, but the overwhelming majority say, "Just before product delivery", as did Jim! He was learning that a consequence of using the waterfall process is that you discover most of your defects during the testing phase, at the end of the project. This puts you at your most vulnerable just when you least want to be. Bearing in mind that an exponential cost-of-change curve is another ramification of using the waterfall, you are finding your defects where they are the most expensive to fix.
Being the conscientious sort of fellow that he is, Jim always looked for ways of reducing risk when he was planning projects. An easy solution for him was to schedule sufficient time at the end of the project for the team to fix all the defects that he expected to find in testing. But then they would have to test the whole system again to make sure that they hadn't introduced any more defects. If they had, they'd need to fix those too and test again to make sure they hadn't introduced still more defects, and so on, and so on. If they found a fundamental flaw in the analysis of the requirements or the design of the project, they might even have to start all over again. It can be difficult to know how long to leave for system testing at the end of a project.
We then discussed another contributing factor to poor quality, one that is coupled very tightly to defect discovery. Project "shunt" is a problem I'm sure you're all familiar with: the early stages of the project overrun but the deadline remains fixed, causing the last stage of the project to become compressed so that the project can still be delivered on the agreed date, avoiding any penalty charges for late delivery. As we all know, the last stage of a waterfall project is the testing stage, the very stage for which we need to allow plenty of time so that we can find the defects, fix them and then test the system again. All too frequently the start date for testing slips, even though the delivery date for the project remains fixed.
Jim recognised then that the problem was multifaceted. The testing stage of the project is where most defects are found; fixing defects sometimes introduces more defects; the later they are found the more expensive they are to fix and there is never enough time at the end of the project to find and fix all of the defects. These are common failings in the production process that were recognised by the manufacturing industries a long, long time ago and although there are many differences between manufacturing and software development, some things are universal. You cannot inspect quality into a product by implementing a testing stage at the end of the process. You need to build quality in and maintain it at every stage.
In the last issue, I discussed where defects come from and the benefits of changing the process so that the customers, testers and programmers could meet and agree the requirements before beginning the coding phase of the project. By codifying the requirements as a set of acceptance tests, they could remove ambiguities and allow the coders to test their work as they progressed. By testing the quality of the work at every stage, they could be rather more confident that few defects would be discovered in the formal testing stage of the project.
So Jim and I agreed: what we needed to do was provide the developers with a set of tests that they could run each time they changed the code, to give them confidence that they were still on track and hadn't broken anything that was previously working. Jim's usual cycle time for system testing, however, was around two months, obviously far too long for developers to spend testing every time they make a change to the codebase. We needed something that would reduce the time spent testing, and I suggested the use of an automated test harness, which Jim readily agreed to.
Jim's company already had a commercially available test harness called QaRobotRunner (name changed to protect the innocent) and had invested a considerable amount of money both in licensing the product and in training their testers to use it. Like most commercially available test harnesses, it had its own scripting language, which the testers used to write tests that exercised the application's functionality through its user interface (UI).
This was why, despite the company's heavy investment in the product, it was hardly ever used by the test team. The scripts required a lot of training and effort to develop and maintain, and they were very sensitive to changes in the UI, needing to be rewritten every time it altered. Because the UI was constantly in flux during development, it wasn't worth writing the scripts until the application was relatively stable, by which time it was far too late, so testing was performed manually for the most part.
The conclusion we came to was that QaRobotRunner may well be an excellent way of testing a finished and stable product, but it was designed for a waterfall environment in which testing doesn't begin until coding has finished. The effort required to maintain the scripts during development made QaRobotRunner relatively useless at that stage. The primary requirement for automated testing during development is that the test harness be quick and easy to use, otherwise the developers won't use it. They need to be able to just press a button and watch the tests run, without spending hours setting up the environment and rewriting scripts. Because QaRobotRunner always had to interact with the UI, the UI was essentially what it was testing, and we felt that, in this case, the resources required to maintain it were not warranted.
Fortunately, there is another solution. What if we wrote tests in the same language as the application, tests that could talk to the application's functionality directly, bypassing the UI and all its attendant problems?
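To make the idea concrete, here is a minimal sketch of what such a test looks like. The `net_price` function and its discount rule are hypothetical stand-ins for a piece of Jim's application logic, not anything from his actual codebase:

```python
# Hypothetical application logic: pricing with a bulk discount.
# In a real project this would live in the business layer, behind the UI.
def net_price(quantity: int, unit_price: float) -> float:
    """Return the total price, applying a 10% discount for orders of 100+."""
    total = quantity * unit_price
    if quantity >= 100:
        total *= 0.9
    return round(total, 2)

# Tests that call the function directly, bypassing the UI entirely.
# They need no UI scripting and survive any redesign of the screens.
assert net_price(10, 2.50) == 25.00
assert net_price(100, 2.50) == 225.00   # bulk discount applied
print("all direct tests passed")
```

Because the tests talk straight to the business logic, they keep working no matter how often the screens change, which is exactly where the UI-driven scripts fell down.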
Well, there's no such thing as a free lunch and although this sounds like the answer to a lot of testing problems, it still leaves us with the problem of how to interface the test harness to the testers. Let's not forget that although many of them may have experience of programming, that's not what they were hired for, so we shouldn't expect them to possess those skills. Training them to an appropriate level would be another considerable investment in time and money. What's needed is a framework that can talk directly to the application but, at the same time, is easy for the testers to work with.
One that I've used a lot recently and am very pleased with is the Framework for Integrated Test (FIT), an open-source offering from programming guru Ward Cunningham, he of patterns, Wikis and eXtreme Programming fame.
FIT is a set of library routines that translate HTML into tests. All it does is read an HTML file, search for tables within it, and test the functionality of the application using the data found in those tables. It then produces an output file containing exactly the same data as the input file, but colouring cells containing failed tests red and cells containing passed tests green. Writing the "glue" code that interfaces the HTML to the application's functionality using FIT is a trivial task for any programmer, and there are versions available in Java, C++ and Python. The input pages may be produced using any application that can generate HTML, and the output pages can be viewed in any browser.
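The mechanism can be sketched in a few lines. This is a toy illustration of the idea, not FIT's real API (FIT uses fixture classes and a proper HTML parser rather than the crude regex matching and hard-wired column layout assumed here); the `net_price` function is again a hypothetical piece of application logic:

```python
import re

def net_price(quantity, unit_price):
    # Hypothetical application logic under test.
    total = quantity * unit_price
    return round(total * 0.9 if quantity >= 100 else total, 2)

# The kind of table a tester could produce in any word processor:
# two input columns and one expected-output column.
input_html = """
<table>
<tr><th>quantity</th><th>unit price</th><th>net price?</th></tr>
<tr><td>10</td><td>2.50</td><td>25.00</td></tr>
<tr><td>100</td><td>2.50</td><td>250.00</td></tr>
</table>
"""

def run_table(html):
    """Run each data row against the application and re-emit the table,
    colouring the expected-value cell green on a pass and red on a fail."""
    rows = re.findall(r"<tr>(.*?)</tr>", html, re.S)
    out = ["<table>", "<tr>%s</tr>" % rows[0]]      # header row unchanged
    for row in rows[1:]:
        qty, price, expected = re.findall(r"<td>(.*?)</td>", row, re.S)
        actual = net_price(int(qty), float(price))
        colour = "green" if actual == float(expected) else "red"
        out.append(
            '<tr><td>%s</td><td>%s</td>'
            '<td style="background:%s">%s</td></tr>'
            % (qty, price, colour, expected))
    out.append("</table>")
    return "\n".join(out)

print(run_table(input_html))
```

Run against the sample table, the first data row passes (green) while the second fails (red), because the tester expected 250.00 but the bulk discount makes the actual answer 225.00 — exactly the kind of requirements mismatch these tables are meant to surface.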
Simplicity itself! We installed FIT on Jim's developers' machines and now there is no need for the testers to learn even a scripting language. As long as they can use a word processor, they can prepare tests in HTML. The developers can run the tests after they've made their changes, before they integrate their code into the main codebase, giving them a high level of confidence that their code works the way the customer wants it to and that they've not introduced any more defects into the system.
Now Jim's developers have an ever-growing collection of regression tests in HTML and find and fix most problems before they even make it into the main codebase. Speed of development has increased as a result of less rework, and the team is confident that they will be able to deliver the product to the testers without slippage and expect few, if any, defects to be found.
Jim is pleased that he can view the tests in his own browser and can see new tests passing every day, giving him visible signs of progress.
- The Java version of FIT can be downloaded from Ward Cunningham's original FIT home page at http://fit.c2.com/, which also contains documentation, tutorials and many examples of use.
- Michael Feathers has ported FIT to C++, downloadable from: http://fitnesse.org/files/CppFit/fitcpp20030720.zip
- The Object Mentor FitNesse site at http://www.fitnesse.org/ combines the Java version of FIT with a Wiki to provide a fully integrated acceptance testing framework. Again, plenty of documentation and examples are provided.
First published in Application Development Advisor