Quality-Assurance Fundamentals

Like management and technical fundamentals, quality-assurance fundamentals provide critical support for maximum development speed. When a software product has too many defects, developers spend more time fixing the software than they spend writing it. Most organizations have found that they are better off not installing the defects in the first place. The key to not installing defects is to pay attention to quality-assurance fundamentals from Day 1 on.

Some projects try to save time by reducing the time spent on quality-assurance practices such as design and code reviews. Other projects—running late—try to make up for lost time by compressing the testing schedule, which is vulnerable to reduction because it's usually the critical-path item at the end of the project. These are some of the worst decisions a person who wants to maximize development speed can make because higher quality (in the form of lower defect rates) and reduced development time go hand in hand. Figure 4-3 illustrates the relationship between defect rate and development time.

Source: Derived from data in Applied Software Measurement (Jones 1991).

Figure 4-3. Relationship between defect rate and development time. In most cases, the projects that achieve the lowest defect rates also achieve the shortest schedules.

A few organizations have achieved extremely low defect rates (shown on the far right of the curve in Figure 4-3), at which point further reducing the number of defects will increase the amount of development time. The extra time is worthwhile when it's applied to life-critical systems such as the life-support systems on the Space Shuttle, but not when it's applied to non-life-critical software development.

IBM was the first company to discover that software quality and software schedules were related. They found that the products with the lowest defect counts were also the products with the shortest schedules (Jones 1991).

Many organizations currently develop software with a level of defects that gives them longer schedules than necessary. After surveying about 4000 software projects, Capers Jones reports that poor quality is one of the most common reasons for schedule overruns (Jones 1994). He also reports that poor quality is implicated in close to half of all canceled projects. A Software Engineering Institute survey found that more than 60 percent of organizations assessed suffered from inadequate quality assurance (Kitson and Masters 1993). On the curve in Figure 4-3, those organizations are to the left of the 95-percent-removal line.

A point in the neighborhood of 95 percent is significant because that level of prerelease defect removal appears to be the point at which projects generally achieve the shortest schedules, least effort, and highest level of user satisfaction (Jones 1991). If you're finding more than 5 percent of your defects after your product has been released, you're vulnerable to the problems associated with low quality, and you're probably taking longer to develop your software than you need to.
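
That removal percentage is easy to compute if you track where defects are found. Here's a minimal sketch; the function name and sample counts are invented for illustration:

```python
def prerelease_removal_rate(prerelease_defects, postrelease_defects):
    """Fraction of all known defects that were removed before release.

    The Jones data above suggests roughly 95 percent prerelease
    removal as the point of shortest schedules.
    """
    total = prerelease_defects + postrelease_defects
    if total == 0:
        return 1.0  # no known defects; treat as fully removed
    return prerelease_defects / total

# Example: 380 defects found before release, 20 reported afterward
rate = prerelease_removal_rate(380, 20)
print(f"{rate:.0%}")  # 95%
```

Note that the denominator can only include defects you know about, so the measured rate improves retroactively as field reports arrive.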

Projects that are in a hurry are particularly vulnerable to shortchanging quality assurance at the individual-developer level. When you're in a hurry, you cut corners because "we're only 30 days from shipping." Rather than writing a separate, completely clean printing module, you piggyback printing onto the screen-display module. You might know that that's a bad design, that it isn't extendible or maintainable, but you don't have time to do it right. You're being pressured to get the product done, so you feel compelled to take the shortcut.

Three months later, the product still hasn't shipped, and those cut corners come back to haunt you. You find that users are unhappy with printing, and the only way to satisfy their requests is to significantly extend the printing functionality. Unfortunately, in the 3 months since you piggybacked printing onto the screen display module, the printing functionality and the screen display functionality have become thoroughly intertwined. Redesigning printing and separating it from the screen display is now a tough, time-consuming, error-prone operation.

The upshot is that a project that was supposed to place a strong emphasis on achieving the shortest possible schedule instead wasted time in the following ways:

  • The original time spent designing and implementing the printing hack was completely wasted because most of that code will be thrown away. The time spent unit-testing and debugging the printing-hack code was wasted too.

  • Additional time must be spent to strip the printing-specific code out of the display module.

  • Additional testing and debugging time must be spent to ensure that the modified display code still works after the printing code has been stripped out.

  • The new printing module, which should have been designed as an integral part of the system, has to be designed onto and around the existing system, which was not designed with it in mind.

All this happens when the only necessary cost, had the right decision been made at the right time, was to design and implement one version of the printing module.
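
The design mistake in this story can be sketched in a few lines. The class names are invented for illustration, and the method bodies are elided:

```python
# The shortcut: printing piggybacked onto the display module.
# Pagination and device handling reuse screen-layout internals, so
# the two features become entangled and neither can change freely.
class DisplayModuleWithPrintHack:
    def render(self, document):
        ...  # screen layout
    def render_to_printer(self, document):
        ...  # leans on screen-layout internals for paper output

# The clean version: printing behind its own interface from Day 1.
# Display and Printer share the document, not each other's internals,
# so extending printing later doesn't disturb the screen code.
class Display:
    def render(self, document):
        ...

class Printer:
    def print_pages(self, document):
        ...  # independent pagination and device handling
```

The second version costs slightly more up front, which is exactly the trade the schedule-pressured project refuses to make.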

This example is not uncommon. Up to four times the normal number of defects are reported for released products that were developed under excessive schedule pressure (Jones 1994). Projects that are in schedule trouble often become obsessed with working harder rather than working smarter. Attention to quality is seen as a luxury. The result is that projects often work dumber, which gets them into even deeper schedule trouble.

A decision early in a project not to focus on defect detection amounts to a decision to postpone defect detection until later in the project, when it will be much more expensive and time-consuming. That's not a rational decision when time is at a premium.

If you can prevent defects or detect and remove them early, you can realize a significant schedule benefit. Studies have found that reworking defective requirements, design, and code typically consumes 40 to 50 percent of the total cost of software development (Jones 1986b; Boehm 1987a). As a rule of thumb, every hour you spend on defect prevention will reduce your repair time 3 to 10 hours (Jones 1994). In the worst case, reworking a software-requirements problem once the software is in operation typically costs 50 to 200 times what it would take to rework the problem in the requirements stage (Boehm and Papaccio 1988). Given that about 60 percent of all defects usually exist at design time (Gilb 1988), you can save enormous amounts of time by detecting defects earlier than system testing.
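
The rule-of-thumb arithmetic works out as follows. This is a sketch: the 3:1 and 10:1 ratios come from the Jones figures above, and the hour counts are invented for illustration:

```python
def hours_saved_by_prevention(prevention_hours, repair_ratio=3):
    """Net schedule hours saved by defect prevention, using the rule
    of thumb that an hour of prevention saves 3 to 10 hours of repair.
    The default ratio of 3 is the conservative end of that range."""
    return prevention_hours * repair_ratio - prevention_hours

# 40 hours spent on reviews and other prevention work
print(hours_saved_by_prevention(40))       # 80 net hours at the 3:1 ratio
print(hours_saved_by_prevention(40, 10))   # 360 net hours at the 10:1 ratio
```

Even at the conservative end, prevention pays back its own cost twice over, which is why cutting it to "save time" is self-defeating.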

Error-Prone Modules

One aspect of quality assurance that's particularly important to rapid development is the existence of error-prone modules. An error-prone module is a module that's responsible for a disproportionate number of defects. On its IMS project, for example, IBM found that 57 percent of the errors were clumped in 7 percent of the modules (Jones 1991). Barry Boehm reports that about 20 percent of the modules in a program are typically responsible for about 80 percent of the errors (Boehm 1987b).

Modules with such high defect rates are more expensive and time-consuming to deliver than less error-prone modules. Normal modules cost about $500 to $1,000 per function point to develop. Error-prone modules cost about $2,000 to $4,000 per function point (Jones 1994). Error-prone modules tend to be more complex than other modules in the system, less structured, and unusually large. They often were developed under excessive schedule pressure and were not fully tested.

If development speed is important, make identification and redesign of error-prone modules a priority. If a module's error rate hits about 10 defects per 1000 lines of code, review it to determine whether it should be redesigned or reimplemented. If it's poorly structured, excessively complex, or excessively long, redesign the module and reimplement it from the ground up. You'll save time and improve the quality of your product.
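
If you track defect counts and module sizes, flagging redesign candidates is straightforward. Here's a sketch, assuming per-module defect and size data; the module names and numbers are invented:

```python
def flag_error_prone(modules, threshold_per_kloc=10):
    """Return the names of modules whose defect density meets or
    exceeds the review threshold of about 10 defects per 1000 lines
    of code.

    `modules` maps module name -> (defect_count, lines_of_code).
    """
    flagged = []
    for name, (defects, loc) in modules.items():
        density = defects / (loc / 1000)  # defects per KLOC
        if density >= threshold_per_kloc:
            flagged.append(name)
    return flagged

history = {
    "printing": (14, 900),   # about 15.6 defects/KLOC: review for redesign
    "display":  (6, 2400),   # about 2.5 defects/KLOC: normal
}
print(flag_error_prone(history))  # ['printing']
```

The threshold only tells you which modules to review; whether to redesign still depends on the structural problems you find when you look.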

Testing

The most common quality-assurance practice is undoubtedly execution testing, finding errors by executing a program and seeing what it does. The two basic kinds of execution testing are unit tests, in which the developer checks his or her own code to verify that it works correctly, and system tests, in which an independent tester checks to see whether the system operates as expected.

Testing's effectiveness varies enormously. Unit testing can find anywhere from 10 to 50 percent of the defects in a program. System testing can find from 20 to 60 percent of a program's defects. Together, their cumulative defect-detection rate is often less than 60 percent (Jones 1986a). The remaining errors are found either by other error-detecting techniques such as reviews or by end-users after the software has been put into production.
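
One way to see why the cumulative rate stays below 60 percent is to model each testing stage as finding some fraction of the defects left behind by the previous stage. This independence assumption is a simplification for illustration, not the model behind the Jones figures:

```python
def cumulative_detection(*stage_rates):
    """Combined detection rate when each stage finds the given
    fraction of the defects remaining after the previous stage."""
    remaining = 1.0
    for rate in stage_rates:
        remaining *= 1.0 - rate
    return 1.0 - remaining

# Mid-range unit testing (30%) followed by mid-range system testing (40%)
print(f"{cumulative_detection(0.30, 0.40):.0%}")  # 58%
```

Even with both stages at the middle of their ranges, more than 40 percent of defects slip through to reviews or end-users.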

Testing is the black sheep of QA practices as far as development speed is concerned. It can certainly be done so clumsily that it slows down the development schedule, but most often its effect on the schedule is only indirect. Testing discovers that the product's quality is too low for it to be released, and the product has to be delayed until it can be improved. Testing thus becomes the messenger that delivers bad news that affects the schedule.

The best way to leverage testing from a rapid-development viewpoint is to plan ahead for bad news—set up testing so that if there's bad news to deliver, testing can deliver it as early as possible.

Technical Reviews

Technical reviews include all kinds of reviews that are used to detect defects in requirements, design, code, test cases, or other project artifacts. Reviews vary in level of formality and in effectiveness, and they play a more critical role in development speed than testing does. The following sections summarize the most common kinds of reviews.

Walkthroughs

The most common kind of review is probably the informal walkthrough. The term "walkthrough" is loosely defined and refers to any meeting at which two or more developers review technical work with the purpose of improving its quality. Walkthroughs are useful to rapid development because you can use them to detect defects well before testing. The earliest time that testing can detect a requirements defect, for example, is after the requirement has been specified, designed, and coded. A walkthrough can detect a requirements defect at specification time, before any design or code work is done. Walkthroughs can find between 30 and 70 percent of the errors in a program (Myers 1979, Boehm 1987b, Yourdon 1989b).

Code reading

Code reading is a somewhat more formal review process than a walkthrough but nominally applies only to code. In code reading, the author of the code hands out source listings to two or more reviewers. The reviewers read the code and report any errors to the author of the code. A study at NASA's Software Engineering Laboratory found that code reading detected about twice as many defects per hour of effort as testing (Card 1987). That suggests that, on a rapid-development project, some combination of code reading and testing would be more schedule-effective than testing alone.

Inspections

Inspections are a kind of formal technical review that has been shown to be extremely effective in detecting defects throughout a project. With inspections, developers receive special training in inspections and play specific roles during the inspection. The "moderator" hands out the work product to be inspected before the inspection meeting. The "reviewers" examine the work product before the meeting and use checklists to stimulate their review. During the inspection meeting, the "author" usually paraphrases the material being inspected, the reviewers identify errors, and the "scribe" records the errors. After the meeting, the moderator produces an inspection report that describes each defect and indicates what will be done about it. Throughout the inspection process you gather data on defects, hours spent correcting defects, and hours spent on inspections so that you can analyze the effectiveness of your software-development process and improve it.
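
The data-gathering step can be as simple as one record per inspection. Here's a sketch with illustrative field names, not any standard inspection form:

```python
from dataclasses import dataclass

@dataclass
class InspectionRecord:
    """Data gathered from one inspection so the development process
    can be measured and improved over time."""
    work_product: str
    defects_found: int = 0
    inspection_hours: float = 0.0
    correction_hours: float = 0.0

    def defects_per_hour(self):
        """Detection rate, useful for comparing inspections with
        other defect-detection practices such as testing."""
        if self.inspection_hours == 0:
            return 0.0
        return self.defects_found / self.inspection_hours

record = InspectionRecord("printing-module design", defects_found=12,
                          inspection_hours=6.0, correction_hours=9.5)
print(record.defects_per_hour())  # 2.0
```

Accumulating these records across a project is what lets you compare the cost per defect of inspections against testing and justify the practice with your own data.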

CROSS-REFERENCE

For a summary of the schedule benefits of inspections, see Chapter 23, "Inspections."

As with walkthroughs, you can use inspections to detect defects earlier than you can with testing. You can use them to detect errors in requirements, user-interface prototypes, design, code, and other project artifacts. Inspections find from 60 to 90 percent of the defects in a program, which is considerably better than walkthroughs or testing. Because they can be used early in the development cycle, inspections have been found to produce net schedule savings of from 10 to 30 percent (Gilb and Graham 1993). One study of large programs even found that each hour spent on inspections avoided an average of 33 hours of maintenance, and inspections were up to 20 times more efficient than testing (Russell 1991).

Comment on technical reviews

Technical reviews are a useful and important supplement to testing. Reviews tend to find different kinds of errors than testing does (Myers 1978; Basili, Selby, and Hutchens 1986). They find defects earlier, which is good for the schedule. Reviews are more cost effective on a per-defect-found basis because they detect both the symptom of the defect and the underlying cause of the defect at the same time. Testing detects only the symptom of the defect; the developer still has to find the cause by debugging. Reviews tend to find a higher percentage of defects. And reviews provide a forum for developers to share their knowledge of best practices with each other, which increases their rapid-development capabilities over time. Technical reviews are a critical component of any development effort that is trying to achieve the shortest possible schedule.

Figure 4-4. Don't let this happen to you! The longer defects remain undetected, the longer they take to fix. Correct defects when they're young and easy to control.

Further Reading on QA Fundamentals

Several of the books referenced elsewhere in this chapter contain sections on general aspects of software quality, reviews, inspections, and testing. Those books include A Manager's Guide to Software Engineering (Pressman 1993), Software Engineering: A Practitioner's Approach (Pressman 1992), Software Engineering (Sommerville 1996), and Code Complete (McConnell 1993). Here are additional sources of information on specific topics:

General software quality

Glass, Robert L. Building Quality Software. Englewood Cliffs, N.J.: Prentice Hall, 1992. This book examines quality considerations during all phases of software development including requirements, design, implementation, maintenance, and management. It describes and evaluates a wide variety of methods and has numerous capsule descriptions of other books and articles on software quality.

Chow, Tsun S., ed. Tutorial: Software Quality Assurance: A Practical Approach. Silver Spring, Md.: IEEE Computer Society Press, 1985. This book is a collection of about 45 papers clustered around the topic of software quality. Sections include software-quality definitions, measurements, and applications; managerial issues of planning, organization, standards, and conventions; technical issues of requirements, design, programming, testing, and validation; and implementation of a software-quality-assurance program. It contains many of the classic papers on this subject, and its breadth makes it especially valuable.

Testing

Myers, Glenford J. The Art of Software Testing. New York: John Wiley & Sons, 1979. This is the classic book on software testing and is still one of the best available. The contents are straightforward: the psychology and economics of program testing; test-case design; module testing; higher-order testing; debugging; test tools; and other techniques. The book is short (177 pages) and readable. The quiz at the beginning gets you started thinking like a tester and demonstrates just how many ways a piece of code can be broken.

Hetzel, Bill. The Complete Guide to Software Testing, 2d Ed. Wellesley, Mass.: QED Information Sciences, 1988. A good alternative to Myers's book, Hetzel's is a more modern treatment of the same territory. In addition to what Myers covers, Hetzel discusses testing of requirements and designs, regression testing, purchased software, and management considerations. At 284 pages, it's also relatively short, and the author has a knack for lucidly presenting powerful technical concepts.

Reviews and inspections

Gilb, Tom, and Dorothy Graham. Software Inspection. Wokingham, England: Addison-Wesley, 1993. This book contains the most thorough discussion of inspections available. It has a practical focus and includes case studies that describe the experiences of several organizations who set up inspection programs.

Freedman, Daniel P., and Gerald M. Weinberg. Handbook of Walkthroughs, Inspections and Technical Reviews, 3d Ed. New York: Dorset House, 1990. This is an excellent sourcebook on reviews of all kinds, including walkthroughs and inspections. Weinberg is the original proponent of "egoless programming," the idea upon which most review practices are based. It's enormously practical and includes many useful checklists, reports about the success of reviews in various companies, and entertaining anecdotes. It's presented in a question-and-answer format.

The next two articles are written by the developer of inspections. They contain the meat of what you need to know to run an inspection, including the standard inspection forms.

Fagan, Michael E. "Design and Code Inspections to Reduce Errors in Program Development," IBM Systems Journal, vol. 15, no. 3, 1976, pp. 182–211.

Fagan, Michael E. "Advances in Software Inspections," IEEE Transactions on Software Engineering, July 1986, pp. 744–751.
