I recently worked on a project for which I was the sole QA person and the ship date was only three months away. The project was to build and deliver a web-based billing system for the local telephone customers of a large telephone company. The team used HTML and JavaScript to present bills, a relational database at the back end, and Java and Enterprise JavaBeans in the middle. The development team consisted mainly of highly skilled designers and programmers who knew little about quality assurance. They were motivated to deliver a great product on time. My job was to help them do that by inventing and executing a QA program for the project.
I was the only team member concerned primarily with product quality, and the delivery date was firm. I didn't have time to develop a well-thought-out plan to assure the goodness of the product. Instead, I kept my eyes open for opportunities to improve product quality and took advantage of them. I later realized that I had done this many times over the years, and I began to think of capitalizing on quality improvement opportunities as a general strategy: "opportunistic software quality."
Here are examples of the opportunities we discovered during this project and how they helped us deliver a high-quality product on time:
- Configuration management: Prior to my joining, the team was already using a configuration management system to track source code changes and to label the configurations delivered to the customer. I made the team's use of the system more rigorous, so that we could readily identify the exact configuration the customer was running, which in turn made it easier to fix the defects the customer reported.
- Bug tracking system: Also before I joined, the team had put a bug tracking system in place, but they used it sporadically and didn't record all defects in it. I became the manager of the bug tracking system, making sure all defects were recorded and addressed. As a result, we didn't forget to fix any of the defects we needed to fix, and the product we delivered was better than it would otherwise have been.
- Automated nightly builds: Infrequent source code builds are a quality problem. "Code rot" can set in: one programmer's code changes can break the compilation of another programmer's code, and a broken build can be difficult to repair if the defect was introduced so long ago that the programmer no longer remembers why the code was changed. On previous projects, I had found that it is possible to find defects simply by compiling the source code and building the product at regular intervals. On this project, I established a system for automatically building the product every night, after the programmers had gone home (a sketch of such a nightly driver appears after this list). The programmers agreed to check in only source code changes that would at least compile. We found a number of defects this way, and because we found each defect within a day of the code change that introduced it, the defects were easier for the programmers to fix than they would have been without regular builds. In addition, builds usually succeeded when they were needed most, such as for an emergency patch release.
- Automated nightly testing: The programmers had built a number of semi-automated tests for particular sections of code. I fully automated the existing tests and, over time, added tests of other important sections of code. I built a system for automatically executing the tests and analyzing the results, and I added it to the nightly builds (the driver sketched after this list runs both steps). As with the builds, this helped us find new defects within a day of a broken code change, making them easier for the programmers to fix than they would have been if we hadn't tested the code regularly.
- QA web site: The team's philosophy was that delivering a high-quality product was a group effort, not merely my responsibility. To give the other team members a view of the state of the product's quality, I built a QA web site for the team. The site offered the most recent automated test results, a way to compare any set of test results with any other (sketched after this list), hyperlinks to web pages that could be used for manual testing, and hyperlinks to written procedures. The test results section made it easy for me to analyze nightly results and update baseline results. The manual-testing hyperlinks turned out to be extremely useful to the other team members; they complained vociferously whenever the site was down.
- Manual testing: It wasn't practical to automate all testing, especially GUI testing. I adapted an old test script so we could use it to test the current version of the product. I made it a policy that the script had to be executed at least once per week, and I rigorously followed the policy. With this regular planned testing, augmented by ad hoc testing from other team members, we identified many defects.
- Source code compiler: One of the team members recommended a better Java source code compiler. I modified the nightly build-and-test scripts to use it. The new compiler enforced a stricter definition of the Java language, and we identified a few defects simply by recompiling with it.
- Improved delivery: One of my responsibilities was to deliver patch releases and beta releases to the customer. Before I joined the team, these deliveries were time-consuming and plagued with mistakes. I developed a set of scripts to automatically build and install the product, and a procedure for making the delivery, drastically reducing the number of delivery mistakes.
- HTML validation: On a previous project, I had used a tool to automatically check the validity of the HTML code the product delivered to users' web browsers. I used the tool on this project as well and found a few defects with it (a toy version of such a check is sketched after this list).
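Below is a minimal Java sketch of the nightly build-and-test driver referred to in the items above, assuming a scheduler such as cron starts it each night after the programmers have gone home. The commands, paths, and class names (cvs update, billing.test.AllTests, and so on) are placeholders invented for illustration; the project's actual scripts are not reproduced here.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // A minimal nightly build-and-test driver.  Every command below is a
    // placeholder, not one of the project's real scripts.
    public class NightlyRun {

        public static void main(String[] args) throws Exception {
            // Pull the latest checked-in sources from configuration management.
            if (run("cvs update -d") != 0) {
                fail("source update");
            }
            // Rebuild everything.  A failure here usually points at a change
            // checked in during the previous day, so it is cheap to track down.
            if (run("javac -d classes src/billing/*.java") != 0) {
                fail("build");
            }
            // Run the automated test suite against the fresh build and leave
            // the results where the QA web site can pick them up.
            if (run("java -cp classes billing.test.AllTests > results/tonight.txt") != 0) {
                fail("tests");
            }
            System.out.println("nightly build and test: OK");
        }

        // Runs a command through the shell, echoes its output, and returns
        // its exit status.
        private static int run(String command) throws Exception {
            System.out.println("+ " + command);
            Process p = Runtime.getRuntime().exec(
                    new String[] { "sh", "-c", command + " 2>&1" });
            BufferedReader out = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line);
            }
            return p.waitFor();
        }

        private static void fail(String stage) {
            // A real driver would notify the team; printing is enough here.
            System.err.println("nightly " + stage + " FAILED");
            System.exit(1);
        }
    }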
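The result-comparison step behind the QA web site can be as small as the following sketch. The one-line "testname PASS|FAIL" result format and the file names are assumptions made for illustration; the report does not describe the site's real formats.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeSet;

    // Compares two sets of test results, e.g.:
    //   java CompareResults results/baseline.txt results/tonight.txt
    public class CompareResults {

        public static void main(String[] args) throws Exception {
            Map<String, String> baseline = load(args[0]);
            Map<String, String> current = load(args[1]);

            // Walk the union of test names so added and removed tests
            // show up as well as status changes.
            TreeSet<String> names = new TreeSet<>();
            names.addAll(baseline.keySet());
            names.addAll(current.keySet());

            for (String name : names) {
                String before = baseline.get(name);
                String after = current.get(name);
                if (before == null) {
                    System.out.println("NEW      " + name + " " + after);
                } else if (after == null) {
                    System.out.println("MISSING  " + name);
                } else if (!before.equals(after)) {
                    System.out.println("CHANGED  " + name + " " + before + " -> " + after);
                }
            }
        }

        // Reads "testname PASS|FAIL" lines into a map.
        private static Map<String, String> load(String file) throws Exception {
            Map<String, String> results = new HashMap<>();
            BufferedReader in = new BufferedReader(new FileReader(file));
            String line;
            while ((line = in.readLine()) != null) {
                int space = line.indexOf(' ');
                if (space > 0) {
                    results.put(line.substring(0, space),
                                line.substring(space + 1).trim());
                }
            }
            in.close();
            return results;
        }
    }

Anything the sketch prints is a change worth investigating, and promoting a night's results to the new baseline is a simple file copy.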
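As for the HTML validation check, the report does not name the tool used, so the sketch below only illustrates the shape of the technique using the JDK's built-in (and rather lenient) HTML parser; a real validator would check each generated page against the HTML specification.

    import java.io.FileReader;
    import java.io.Reader;
    import javax.swing.text.html.HTMLEditorKit;
    import javax.swing.text.html.parser.ParserDelegator;

    // Toy HTML check: feed each page the product generates through a
    // parser and report anything it flags.
    public class HtmlCheck {

        public static void main(String[] args) throws Exception {
            for (String file : args) {
                final String name = file;
                Reader in = new FileReader(file);
                new ParserDelegator().parse(in, new HTMLEditorKit.ParserCallback() {
                    public void handleError(String errorMsg, int pos) {
                        System.out.println(name + " at " + pos + ": " + errorMsg);
                    }
                }, true);
                in.close();
            }
        }
    }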
There were a few techniques we considered using but did not use: automated GUI testing, code coverage analysis, exhaustive testing of all source code, retrofitting pre-existing tests to a better test framework, and reorganizing the source code to improve building and maintenance.
Our results were good. The customer discovered only 7% of the total number of known defects. The two best techniques for identifying defects were manual testing and automated nightly testing: manual testing found 46% of known defects, and automated nightly testing found 29%. (These figures are slightly out of date; I will present current figures in the final version of the paper and at the conference.)
We delivered a relatively high-quality product on time. The customer accepted the delivery and found very few defects. The opportunistic strategy worked.
Richard Kasperowski is president of Altisimo Computing, a software development consulting firm based in Cambridge, Massachusetts. Richard has worked as a tester, developer, manager, and consultant since 1988. He has a degree from Harvard University, is a member of the ACM, and usually cycles to his clients' offices.