Chapter 6. Lessons About Quality

I wrote a perfect software application once, in assembly language, no less. It wasn’t large—an educational chemistry game—but it had zero defects and did everything it was supposed to do correctly. I’ve also written a lot more code that, despite my best efforts, contained errors that I had to correct later. High-quality software is important to me, as it should be to everyone who creates or uses software systems. We should all strive for quality in the work we do—but what does quality mean?

Definitions of Quality

People have tried to define quality for ages, but it’s elusive. I’ve seen many attempts at it, but I’m aware of no all-inclusive yet succinct definition that applies to software. The American Society for Quality (2021a) acknowledges this reality with the first part of its definition of quality: “A subjective term for which each person or sector has its own definition.” That’s true, if not terribly helpful. Different observers will indeed have varying conceptions of what constitutes quality—or the lack thereof—in a given product. Here are some other definitions of quality; all have merit, but none are complete.

• “1) The characteristics of a product or service that bear on its ability to satisfy stated or implied needs; 2) a product or service free of deficiencies,” from the American Society for Quality (2021a).

• The “degree to which a software product satisfies stated and implied needs when used under specified conditions,” from the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC, 2011).

• Fitness for use, meaning that a product should satisfy a customer’s real needs and lead to customer satisfaction, from quality pioneer Joseph M. Juran (American Society for Quality, 2021b).

• Conformance to requirements, from Philip B. Crosby (1979).

• Zero defects, also from Crosby (1979).

• Value to some person, from Gerald Weinberg (2012).

We can draw two conclusions from these diverse definitions of quality: quality has multiple aspects, and quality is situational. We can probably all agree that, in the context of delivered software, quality describes how well the product does whatever it’s supposed to do; a more rigorous definition is likely to remain elusive. Nonetheless, each project team needs to explore what quality means to its customers, how to assess it, and how to achieve it, and then communicate that knowledge clearly to all project participants (Davis, 1995).

In an ideal world, each project would deliver a product that contains all the features any user would ever need, with zero defects and perfect usability, produced in record time at minimal cost. But that’s a fantasy; quality expectations must be realistic. The decision makers on each project need to determine which aspects of project success are most important and what trade-offs they can appropriately make in the quest to achieve their business objectives.

Planning for Quality

The aggregated impacts of software quality shortcomings across an organization, a nation, or the planet as a whole are staggering. A detailed analysis estimated the total costs of poor software quality in the United States in 2018 at approximately $2.26 trillion if technical debt is not included and $2.84 trillion if it is (Krasner, 2018). Just imagine the economic benefits—at every level—that higher-quality software could yield.

As Figure 6.1 illustrates, the classic project management iron triangle or triple constraint doesn’t explicitly show quality as an adjustable parameter along with scope, cost, and time. You could interpret that absence to mean either that quality is a nonnegotiable expectation or—more likely—that you get whatever level of quality the team can achieve within the constraints the other parameters impose. That is, quality is a dependent, not independent, variable (Nagappan, 2020b).


Figure 6.1 The classic project management iron triangle doesn’t show quality explicitly.

However, development teams and managers sometimes decide to compromise on quality to meet a delivery date or include a richer—if imperfect—feature set that makes their product more attractive to its customers. That’s why my enhanced five-dimensional model from Lesson #31 in Chapter 4 includes quality as an explicit project parameter, along with scope, schedule, staff, and budget. The people who make release decisions might tolerate some number of known defects if they conclude that those defects will have little customer or business impact. The affected users might not agree that the development team made a sensible trade-off decision, though (Weinberg, 2012). If a user’s favorite feature is the one with a defect, the user is likely to see the entire product as flawed—and tell everyone they know about it.

A software system doesn’t need to have many problems to give an impression of low quality. I like to write and record songs, just for fun. I bought an application I can use to write musical scores. Scoring music is a complex problem; the app I use is correspondingly—and unavoidably—complex. It has some usability deficiencies; entering notes is tedious at best. Worse, I’ve encountered numerous software failures as I tried to create or modify scores. It’s highly frustrating to try to enter some ordinary bit of musical data and have the program go nuts, displaying something completely wrong. This application contains an excessively rich set of features, many of which I will never use and some that don’t work well at all. I’d rather have fewer features that satisfy most users’ needs and all work correctly.

Software teams will benefit from creating a quality management plan at the beginning of the project. The plan should establish realistic quality expectations for the product, including defining defect severity classifications (severe, moderate, minor, cosmetic). That will help all project participants to think of quality issues in a consistent way. Establishing common terminology and expectations regarding the various types of software testing required further helps to align stakeholders toward a shared objective of building a high-quality solution.
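
As a simple illustration, here is a minimal sketch, in Python, of how a team might record the severity scale from its quality management plan. The category descriptions and the release rule are hypothetical; each team would substitute the definitions and criteria from its own plan.

from enum import Enum

class Severity(Enum):
    """Illustrative defect severity scale; adapt the wording to your own plan."""
    SEVERE = "Blocks essential operations or corrupts data; no workaround exists"
    MODERATE = "Impairs a feature; an inconvenient workaround exists"
    MINOR = "Annoying but does not impair any feature"
    COSMETIC = "Typos, layout glitches, and other appearance-only issues"

def must_fix_before_release(severity: Severity) -> bool:
    """Hypothetical release rule: ship with no known severe or moderate defects."""
    return severity in (Severity.SEVERE, Severity.MODERATE)

print(must_fix_before_release(Severity.MINOR))  # False under this illustrative rule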

Multiple Views of Quality

Software quality encompasses many dimensions. It’s more than simply meeting specified requirements (assuming that those requirements are correct), and it’s more than being defect-free. We need to consider numerous characteristics to fully understand what quality means for a given product and its users: features, performance, aesthetics, reliability, usability, cost, timeliness of delivery, and so on (Juran, 2019).

As we saw in Lesson #20, “You can’t optimize all desirable quality attributes,” software project teams need to explore a broad set of quality attribute requirements. Because they can’t create a product that exhibits ideal quality for every attribute, quality compromises often are necessary. Designers must make choices that favor certain attributes over others. The various quality attributes have to be specified precisely and prioritized so that decision makers can make appropriate trade-off choices.

Also, how one stakeholder group perceives quality could conflict with another’s expectations. A developer might consider high-quality code to be written elegantly and to execute efficiently and correctly. However, a maintainer might more highly value code that’s easy to understand and modify. A user could consider high-quality code—to the extent that they think about code at all—as being whatever’s necessary to let them use the product easily and without failures. In this example, the developer and maintainer are focused on the product’s internal quality; the user cares about its external quality.

Building Quality In

Other than cosmetic and aesthetic improvements, quality isn’t something you simply add to the system when you get around to it. You can’t just write several availability goals in a user story and add the story to the product backlog for some future development iteration. You must deliberately build in quality from the beginning through the processes you follow, the objectives you set, and your team members’ attitudes. Some quality attributes impose constraints that affect aspects of the entire development process, not just specific bits of functionality (Scaled Agile, 2021c). Satisfying certain quality attributes presents architectural implications that the team should address from the project’s outset.

It’s hard to retrofit quality into a product built on a shaky foundation. If developers take shortcuts in their haste to implement features, they’ll accrue technical debt that makes it increasingly difficult to modify and extend the codebase. Technical debt refers to accumulated quality shortcomings in implemented software. It has many causes and is a major contributor to missed project deadlines (Pearls of Wisdom, 2014a). That debt comes due eventually. (See Lesson #50, “Today’s ‘gotta get it out right away’ development project is tomorrow’s maintenance nightmare.”)

Customers who suffer because of product quality problems aren’t happy about it. Just about every day, I encounter some website or other product that’s thoughtlessly designed, hard to figure out, wastes my time, or simply doesn’t work right. That’s annoying because I know that often it’s not much harder to build a better product. A software colleague who periodically relates low-quality experiences to me ends his reports with “NWNC,” his shorthand for “Nothing works and nobody cares.” Sadly, he’s right far too often.

The total cost of quality encompasses everything you do to prevent, detect, and correct product defects. (For more on the cost of quality, see Lesson #44, “High quality naturally leads to higher productivity.”) Business analysts, developers, and other project contributors will make mistakes—we’re all human. You need to establish technical practices that minimize the number of defects created. You also need to develop a personal ethic and an organizational culture that value defect prevention and early detection. Strive to find defects early before they do too much damage—that is, before they generate too much rework.

Not every product must be perfect, but every product must exhibit good enough quality, as judged by users and other stakeholders. The early adopters of highly innovative products have a high tolerance for defects, so long as the product lets them do some cool new things. Other domains—such as medical devices, aircraft systems, and reusable software components—demand far more stringent quality standards. The first step is for each project team to decide what quality, in all its many forms, means for their product. After that, they might find the eight lessons about software quality in this chapter helpful.

First Steps: Quality

I suggest you spend a few minutes on the following activities before reading the quality-related lessons in this chapter. As you read the lessons, contemplate to what extent each of them applies to your organization or project team.

1. How does your organization define quality for its products, both internal aspects of quality for developers and maintainers and external quality for end users?

2. Do your project teams document what quality means specifically for each of their projects? Do they set measurable quality goals?

3. How does your organization judge whether each product conforms to its team’s—and its customers’—quality expectations?

4. List software quality practices that your organization is especially good at. Is information about those practices documented to remind team members about them and make it easy to apply them?

5. Identify any problems—points of pain—that you can attribute to shortcomings in how your project teams approach software quality.

6. State the impacts that each problem has on your ability to complete projects successfully. How do the problems impede achieving business success for both the development organization and its customers? Quality problems lead to both tangible and intangible costs, such as unplanned rework, schedule delays, support and maintenance costs, customer dissatisfaction, and uncomplimentary product reviews.

7. For each problem from Step #5, identify the root causes that trigger the problem or make it worse. Problems, impacts, and root causes can blur together, so try to tease them apart and see their connections. You might find multiple root causes that contribute to the same problem, as well as several problems that arise from a single root cause.

8. As you read this chapter, list any practices that would be useful to your team.

Lesson #43. When it comes to software quality, you can pay now or pay more later.

Suppose I’m a BA and I have a conversation with a customer to flesh out some requirement details. I go back to my office and write up what I learned in whatever form my project uses for requirements. The customer emails me the next day and says, “I just talked to one of my coworkers and learned that I had something wrong in that requirement we talked about yesterday.” How much work must I do to correct that error? Very little; I simply update the requirement to match the customer’s current request. Let’s say that making that correction cost ten dollars’ worth of company time.

Alternatively, suppose the customer contacts me a month or even six months after we had the conversation to point out the same problem. Now how much does it cost to correct that error? It depends on how much work the team has done based on the original, incorrect requirement. Not only does my company still have to pay ten dollars to fix the requirement, but a developer might have to redo some portion of the design. Maybe that costs another thirty or forty dollars. If the developers already implemented the original requirement, they’ll have to modify or recode it. They’ll need to update tests, verify the newly implemented requirement, and run regression tests to see if the code changes broke anything. All that could cost perhaps a hundred dollars more. Maybe someone must revise a web page or a help screen, as well. The bill keeps increasing.
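
A minimal sketch in Python makes the arithmetic explicit. The dollar figures are the illustrative numbers from this example, not measured data:

# Cost to correct the same requirement error, by how far the work has progressed.
stage_costs = {
    "fix the requirement": 10,
    "redo part of the design": 35,                    # "thirty or forty dollars"
    "recode, retest, and rerun regression tests": 100,
}

running_total = 0
for stage, cost in stage_costs.items():
    running_total += cost
    print(f"{stage}: ${cost} (cumulative: ${running_total})")

# Caught the next day, the error costs only the first $10. Caught after design
# and coding, it costs $145, and the bill keeps growing if web pages, help
# screens, and deployments must also be redone.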

Software’s malleability lets us make changes and corrections whenever warranted. But every change has a price. Even discussing the possibility of adding some functionality or fixing a bug and then deciding not to do it takes time. The longer a requirement defect lingers undetected and the more rework you have to do to correct it, the higher the price tag.

The Cost-of-Repair Growth Curve

The cost of correcting a defect depends on when it was introduced into the product and when someone found it. The curve in Figure 6.2 shows that the cost increases significantly for late-discovered requirements errors. I omitted a numeric scale on the Y-axis because various sources cite different data, and software people debate the exact numbers. The cost ratio depends on the product type, the development life cycle being followed, and other factors.


Figure 6.2 The cost to correct a defect increases rapidly with time.

For instance, data from Hewlett-Packard indicated that the cost ratio could be as high as 110:1 if a customer discovered a requirement defect in production versus someone finding it during requirements development (Grady, 1999). Another analysis suggested a relative cost factor of 30:1 to correct errors introduced during requirements development or architectural design that were discovered post-release (NIST, 2002). For highly complex hardware/software systems, the cost amplification factor from discovery in the requirements stage versus the operational stage can range from 29X to more than 1500X (Haskins et al., 2004).

Regardless of the exact numbers, there’s broad agreement that early defect correction is far cheaper than fixing defects following release (Sanket, 2019; Winters, Manshreck, and Wright, 2020). It’s a bit like paying your credit card bill. You can pay the balance due on time, or you can pay a smaller amount now plus the remaining balance along with substantial interest charges and late fees in the future. Johanna Rothman (2000) compared how three hypothetical companies could employ different strategies to deal with defects and consequently experience different relative defect-repair costs. However, in all three scenarios, the later in the project the team fixes a defect, the more it costs.

Some people have argued that agile development greatly flattens the cost-of-change curve (Beck and Andres, 2005). I haven’t yet located any actual project data to support this contention. However, this lesson isn’t about the cost of making a change like adding new functionality—it’s about the price you pay to correct defects. A requirement defect that is discovered before a user story is coded is still less expensive to repair than a requirements defect identified during a sprint review. Scott Ambler (2006) suggested that the relative defect-correction cost is lower on agile projects because of agile’s quick feedback cycles that shorten the time between when some work is done and when its quality is assessed. That sounds plausible, but it only partially addresses the fundamental issue with defect-repair costs.

The issue with cost-to-repair is not only the days, weeks, or months between when the defect was injected into the product and when someone discovers it. The amplification factor depends on how much work was done based on the defective bit that now must be redone. It costs very little to fix a coding error if your pair-programming partner finds it moments after you typed the wrong thing. However, if the customer calls to report the same type of error when the software is in production, it certainly will be far more difficult to rectify. As an example, a developer friend of mine related this recent experience:

This week I missed one comma (literally) in a ColdFusion script on a custom website for a client. It caused a crash, which caused him a delay and hassle. Plus, then there were the back-and-forth emails, and then me opening up all the tools and source code and finding the bug, adding the comma, retesting, and so on. One darn comma.

Harder to Find

Diagnosing a system failure takes longer if the underlying fault was introduced long ago. If you review some requirements and spot an error, you know exactly where the problem lies. However, if a customer reports a malfunction—whether that’s one month or five years after someone wrote the requirement—the detective work is more challenging. Is the failure due to an erroneous requirement, a design problem, a coding bug, or an error in a third-party component? Therein lies Ambler’s argument for lower defect-correction costs on agile projects: defects are revealed shortly after they’re introduced, so it’s easier to locate the fault that leads to a failure.

After you uncover the root cause—the fault—for a customer-reported system failure, you have to recognize all of the affected work products, repair them, retest the system, write release notes, redeploy the corrected product, and reassure the customer that the problem is fixed. That’s a lot of expensive “re-” stuff to do. Plus, at that point, the problem has affected many more stakeholders than if someone had found it much earlier.

Go Left, Young Man

Serious defects discovered during system testing can lead to a lot of repair work. Those found after release can disrupt user operations and trigger emergency fixes that divert resources from new development. This reality leads us to several thoughts about how to pay less for high-quality software.

Prevent Defects Instead of Correcting Them

Quality control activities, such as testing, static code analysis, and code reviews, look for defects. Quality assurance activities seek to prevent defects in the first place. Improved processes, better technical practices, more proficient practitioners, and taking a little more time to do our work carefully are all ways to prevent errors and avoid their associated correction costs.

Push Quality Practices to the Left

Regardless of the project’s development life cycle, the earlier you find a defect, the cheaper it is to resolve. Each piece of software work involves a micro-sequence of requirements, design, and coding, moving from left to right on a timescale axis. We’ve seen that eradicating requirement errors provides the greatest leverage for time savings down the road. Therefore, we should use all the tools at our disposal to find errors in requirements and designs before they’re translated into the wrong code.

Peer reviews and prototyping are effective ways to detect requirement errors. Pushing testing from its traditional position late in the development sequence—on the right side of the timeline—far to the left is particularly powerful. Strategy options include following a test-driven development process (Beck, 2003), writing acceptance tests to flesh out requirements details (Agile Alliance, 2021b), and—my preference—concurrently writing functional requirements and their corresponding tests (Wiegers and Beatty, 2013).

Every time I write tests shortly after writing requirements, I discover errors in both the requirements and the tests. The thought processes involved with writing requirements and tests are complementary, which is why I find that doing both yields the highest-quality outcome. Writing tests through a collaboration between the BA and the tester leverages both doing the work earlier in the process and having multiple sets of eyes look at the same thing from different perspectives. Writing tests early in the development cycle doesn’t add time to the project; it just reallocates time to a point where it provides greater quality leverage. Those conceptual tests can be elaborated into detailed test scenarios and procedures as development progresses.
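
Here is a minimal sketch of that concurrent approach in Python, built around a hypothetical requirement: lock an account after five consecutive failed sign-in attempts. The requirement wording, the stub, and the conceptual tests are all illustrative assumptions, but writing the tests right after the requirement immediately raises a question the requirement alone left open: does a successful sign-in reset the count?

def is_locked(attempt_results: list[bool], limit: int = 5) -> bool:
    """Stub of the eventual behavior: locked when the trailing run of failures reaches the limit."""
    consecutive_failures = 0
    for succeeded in attempt_results:
        consecutive_failures = 0 if succeeded else consecutive_failures + 1
    return consecutive_failures >= limit

# Conceptual tests written alongside the requirement, before real implementation.
assert not is_locked([False] * 4)                          # four failures: still open
assert is_locked([False] * 5)                              # five straight failures: locked
assert not is_locked([False] * 4 + [True] + [False] * 4)   # a success resets the count
print("conceptual tests pass")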

During implementation, developers can use static and dynamic code analysis tools to reveal many problems far faster than humans can review code manually. These tools can find run-time errors that code reviewers struggle to spot, such as memory corruption bugs and memory leaks (Briski et al., 2008). On the other hand, human reviewers can spot code logic errors and omissions that automated tools won’t detect.

The timing of quality control activities is important. I once worked with a developer who wouldn’t let anyone review her code until it was fully implemented, tested, formatted, and documented—that is, clear on the right side of her development time scale. At that point, she was psychologically resistant to hearing that she wasn’t done after all. Each issue that someone raised in a code review triggered a defensive response and rationalization about why it was fine the way it was. You’re much better off starting with preliminary reviews on just a portion of a work item—be it requirements, design, code, or tests—to get input from others on how to craft the rest of the item better. Push quality to the left by reviewing early and often.

Track Defects to Understand Them

The most efficient way to control defects is to contain them to the life cycle activity—requirements, design, coding—in which they originated. Record some information about your bugs instead of simply swatting them and moving on. Ask yourself questions to identify the origin of each defect so you can learn what types of errors are the most common. Did this problem happen because I didn’t understand what the customer wants? Did I understand the need accurately but make an incorrect assumption about other system components or interfaces? Did I simply make a mistake while coding? Was a customer change not communicated to everyone who needed to know about it?

Note the life cycle activity (not necessarily a discrete project phase) in which each defect originated and how it was discovered. You can calculate your defect containment percentage from that data to see how many problems are leaking from their creation stage into later development activities, thereby amplifying their cost-to-repair factors. That information will show you which practices are the best quality filters and where your improvement opportunities lie.
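
A minimal sketch of the calculation, assuming each defect record captures the originating activity and the discovering activity (the sample records are invented for illustration):

# Containment means the defect was found in the same activity that created it.
defects = [
    {"origin": "requirements", "found": "requirements"},
    {"origin": "requirements", "found": "system test"},
    {"origin": "design", "found": "design"},
    {"origin": "coding", "found": "coding"},
    {"origin": "coding", "found": "production"},
]

contained = sum(1 for d in defects if d["origin"] == d["found"])
containment_pct = 100 * contained / len(defects)
print(f"Defect containment: {containment_pct:.0f}%")   # 60% for this sample

# The leakers ("requirements" found in system test, "coding" found in
# production) are the defects whose repair costs get amplified.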

Minimizing defect creation and finding defects early reduces your overall development costs. Strive to bring your full arsenal of quality weapons to bear from the earliest project stages.

Lesson #44. High quality naturally leads to higher productivity.

Software-developing organizations and individuals would love to be more productive. Quality problems pose one of the greatest barriers to high productivity. Teams plan to get a certain amount of work done in a specified time, but then they have to fix problems found in completed work or reallocate effort to repair a production system. That rework saps time and morale. A way to boost productivity is to create high-quality software from the outset so that teams can spend less time on rework both during development and following deployment (Wiegers, 1996).

I hate rework, doing over something that I’ve already completed. I learned this in my ninth-grade shop class. Our first project was to take a short piece of 2×4 lumber, shape it to specific dimensions, and practice using various tools on it. If we drilled a hole in the wrong spot or planed the wood down below the specified dimensions, we had to start over. It took me nine attempts to get it right.

I noticed that a classmate worked more slowly than me but finished his project in just two tries. He had to do far less rework because of mistakes, and he didn’t have to buy nine pieces of 2×4. Both his work quality and his productivity exceeded mine. I learned a vital lesson: go slow to go fast. Ever since then, I’ve tried to avoid having to do something more than once. Building in quality from the beginning frees up time to devote to new, value-added work. That was true in the woodshop, and it’s even truer in software development.

A Tale of Two Projects

To illustrate how poor quality lowers productivity, let’s compare two real projects at the same company, as related by consultant Meilir Page-Jones, who worked on Team B. This company’s IT department developed two new core, high-availability applications concurrently to replace twenty-year-old legacy systems. We have two projects, two teams, two approaches, and two very different outcomes. (See the introduction to Chapter 5 for more about this case.)

The Approaches

The managers of Teams A and B created time and budget estimates, all of which got slashed by upper management. Team A created a fairly lengthy and boring textual requirements specification, obtained sign-off, and began coding soon afterward. Their attitude was, “If we don’t start coding now, we’ll never meet the deadline.” Team A developed their database design somehow from the procedural code.

Team B’s project manager firmly believed in software engineering. Team B created their requirements largely in the form of visual models, supplemented with textual descriptions of use cases, data and their relationships, page layouts, and so forth. They developed their database design from a class-association diagram and created test cases early in development from the software models.

The Results

Team A started large and grew even larger. They met their deadline by adding several developers and testers and working a lot of overtime. They ran 50 percent over budget, much of which they spent on debugging in the months before delivery. Following delivery, Team A received at least one message daily from users that their system had crashed or “done something mysterious.” They established a “Commando Squad” to respond to the steady stream of problems.

Team B started very small and grew, though not as much as Team A. By the targeted completion date, Team B had a working but incomplete system. They required two more months to deliver the finished system, which put them 20 percent over schedule and 10 percent over budget. The system worked well and generated only a few undramatic enhancement suggestions.

Several months later, an audit discovered that System A’s mysterious problems were due to a massively corrupted database, which had been accumulating bad information for months. A manual cleansing proved futile. The database was soon corrupt again; no one knew why. Team A did an ugly reversion to the legacy system they were attempting to replace while they totally overhauled their new system. Within a few months, though, their system wouldn’t restart at all; the company finally scrapped it as irreparable. They launched a new project to rebuild System A—with Team B’s manager at the helm!

The Analysis

Team A rushed a poorly designed and hastily built system into production on schedule without using solid software engineering practices. The team spent months on both pre- and post-delivery rework before the company finally abandoned its investment in the system. Management had expected the people from Team A to be available after delivery to work on the next project, but their Commando Squad was busy chasing down problems and patching in fixes. Since their system was discarded, Team A’s ultimate productivity was zero. Poor quality throughout the project cost the company a great deal of time and money.

Meanwhile, Team B took a little more time to build a high-quality system that required little rework effort and freed up most of the team to move on to the next project. I’ll take Team B over Team A every time.

The Scourge of Rework

There are two major classes of software rework: fixing defects and paying off technical debt. The previous lesson described how defect-correction costs grow over time. Similarly, the longer that shortcomings linger in the code, the more technical debt that accrues, and the more work it will take to improve the design. (See Lesson #50, “Today’s ‘gotta get it out right away’ development project is tomorrow’s maintenance nightmare.”) Refactoring makes code easier to maintain and extend, but sometimes code needs to be refactored because it was generated in haste from a less-than-ideal design. Excessive, unanticipated rework is waste that distracts developers from delivering more customer value.

Too often, organizations implicitly accept rework as a normal part of software development and don’t give it much thought. A certain amount of software rework is inevitable. It’s the nature of knowledge work, imperfect human communication, and our inability to see the future clearly. Reworking a design to accommodate unexpected new functionality is preferable to overdesigning a system to permit potential growth that never materializes. However, each team should strive to minimize avoidable rework by improving their initial work quality.

Project teams don’t always factor the likelihood of rework effort into their planning. Even if their estimates are accurate for the development work, those estimates will be low once rework rears its ugly head. I’ve seen this disparity show up in project plans that didn’t allocate any time for fixing errors found during quality control activities like testing or peer review. I suggest explicitly calling out rework as discrete project tasks instead of burying it inside the defect-detection tasks. Making rework effort visible is the first step toward reducing it.

Organizations that track how much of their software effort goes into rework get some frightening numbers. A bank determined that it spent between 1 and 1.5 million dollars per month on automated retesting (McAllister, 2017). Various studies have shown that software teams may spend 40 to 50 percent of their time on avoidable rework (Charette, 2005). Just think of how your team’s productivity would jump if they had one-third or more of their time back for new development work!

If you’re recording any software work effort metrics, try to separate defect-finding effort from defect-fixing effort. Learn how much time you spend on rework, when, and why. That data reveals high-leverage opportunities for increasing productivity. Here’s a hint: up to 85 percent of rework costs can be attributed to defects in requirements (Marasco, 2007). Having some baseline data regarding your rework burden would let you set improvement targets and see if better software processes and practices drive down your rework levels (Hossain, 2018; Nussbaum, 2020). When my software team did this, we reduced our defect-correction maintenance work from 13.5 percent of our total effort down to a sustained level of about 2 percent (Wiegers, 1996).
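
A minimal sketch of that kind of analysis, with invented category labels and hours; the point is simply to make the rework share of total effort visible:

# Simple work log: (category, hours). Categories and hours are illustrative.
work_log = [
    ("new development", 320),
    ("defect finding (testing, reviews)", 60),
    ("defect fixing (rework)", 95),
    ("technical-debt paydown (rework)", 25),
]

total_hours = sum(hours for _, hours in work_log)
rework_hours = sum(hours for label, hours in work_log if "rework" in label)
print(f"Rework share of total effort: {100 * rework_hours / total_hours:.0f}%")  # 24% here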

The Cost of Quality

Perhaps you’ve heard that quality is free. That was the title of a classic 1979 book by Philip B. Crosby. “Quality is free” means that the additional incremental effort needed to do a job properly the first time is a smart investment. It takes more time and money to fix a problem than to prevent one. Poorly done work is a hassle for anyone downstream in the workflow and has unpleasant consequences like these:

• Accumulating technical debt that makes it harder and harder to enhance a product.

• Lost opportunities and delays in other projects when rework distracts development staff.

• Customer service outages and the ensuing problem reports, loss of trust, and perhaps lawsuits.

• Warranty claims, refunds, and disgruntled customers.

The term cost of quality refers to the total price a company pays to deliver products and services of acceptable quality. The cost of quality consists of four components (Crosby, 1979; American Society for Quality, 2021c):

Defect prevention: Quality planning, training, process improvement activities, root cause analysis

Quality appraisal: Evaluating work products and processes for quality issues

Internal failure: Failure analysis and rework to fix problems before releasing the product

External failure: Failure analysis and rework to fix problems after delivery, handling customer complaints, product repairs and replacements
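
A minimal sketch in Python shows how an organization might roll up these four components into a total cost of quality and see how its spending is distributed. The dollar amounts are invented for illustration, not benchmarks:

# Annual cost-of-quality rollup; replace the figures with your own data.
cost_of_quality = {
    "prevention": 40_000,         # planning, training, process improvement
    "appraisal": 110_000,         # reviews, testing, other evaluation
    "internal failure": 180_000,  # rework on defects found before release
    "external failure": 270_000,  # rework, support, and repairs after release
}

total = sum(cost_of_quality.values())
for component, cost in cost_of_quality.items():
    print(f"{component:>16}: ${cost:>9,} ({100 * cost / total:.0f}% of cost of quality)")
print(f"{'total':>16}: ${total:>9,}")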

Skimping on defect prevention and quality appraisal leads to skyrocketing failure costs. Besides the rework costs in time and money, external failures can incur business downsides such as compromised business efficiency (as with Team A above) and the loss of customers. There are plenty of horror stories about companies that suffered massive monetary losses and a loss of public trust following high-profile software failures (McPeak, 2017; Krasner, 2018).

Software organizations would find it insightful to understand their total cost of quality and how those costs are distributed across the various quality activities. That assessment requires data collection and analysis, but it shows organizations exactly where they’re spending money on quality. The data lets the organization decide if that’s where they want to be spending their money.

I built a cost-of-quality spreadsheet model for one of my consulting clients. The model let them calculate just how much a requirement or design error cost them on average, depending on when it was found. Once they knew what percentage of their budget was spent on new software development versus defect prevention, quality appraisal, and internal and external failure, they could reallocate their effort on quality activities for maximum benefit. This sort of analysis reveals the return on investment an organization is getting from defect prevention and early defect discovery.

Ordinary human errors and some rework are inescapable. Rework can add value if it makes the product more capable, efficient, reliable, or usable. A company’s managers might elect to tolerate some rework as an acceptable trade-off between being speedy and spending a little more up front. That business decision might look good on the accounting books, but it could cause more expensive future problems. The techniques described at the end of Lesson #43 also cut down on excessive rework, thereby reducing the organization’s overall cost of quality and boosting productivity.

Did I mention that I hate rework?

Lesson #45. Organizations never have time to build software right, yet they find the resources to fix it later.

The previous lesson described two legacy-system replacement projects that a company conducted at the same time. One project succeeded, albeit a little over schedule and budget. The other substantially overran its budget and delivered, on time, a badly flawed system that ultimately was discarded. When the company abandoned the failed system, they didn’t say, “Well, that didn’t work out. Let’s move on to the next project.” They still needed to replace the legacy system for business purposes. Therefore, they had to try it again, this time using sound software engineering approaches.

I’ve marveled at this great mystery of the software business for a long time. Many project teams work under unrealistic schedule and budget pressures that force them to cut quality corners. The result is often a product that must be extensively—and expensively—repaired or even abandoned. Somehow, though, the organization finds the time, money, and people to perform the repair or replacement work.

Why Not the First Time?

You’d think that if a system were so vital and urgent that management placed great pressure on the IT staff to rush it out, it would be worth building it properly. My high school chemistry class had a sign on the wall that asked, “If you don’t have time to do it right, when will you have time to do it over?” I internalized that message and have carried it with me ever since. When software teams aren’t provided with the time, skilled staff, processes, or tools to do the job right, they’ll inevitably have to do at least parts of it over. As we saw in the previous lesson, such rework is a productivity sinkhole.

Unfortunately, too many people don’t appreciate the value of taking some additional time to build the software right instead of fixing it later. Time for effective quality practices such as technical peer reviews often isn’t built into the schedule. As a result, people hold reviews only if they’ve personally internalized the value. Even if reviews are planned as part of the development process, projects with overly aggressive schedules might skip them because no one has time to participate. Omitting reviews and other quality practices doesn’t mean the defects aren’t there; it just means that someone’s going to find them later, when the consequences are greater.

Large-scale failures often are more the result of management problems than technical issues. Underestimated scope, coupled with an unrealistic hope that developers can work faster than they have in the past, guarantees schedule slips and quality shortfalls. At both the individual practitioner and management levels, people need to take the actions and time needed for success to avoid wasting potentially huge amounts of time and money.

The $100 Million Syndrome

It seems that the only time disastrous projects are completely abandoned is when a failed government system costs more than $100 million. Corporations need their new systems to conduct business, so they’ll tackle them again; governments sometimes throw in the towel or switch to Plan B. As one example among many, the Federal Aviation Administration’s Advanced Automation Program was launched in 1982 as a sweeping program to modernize its air traffic control (ATC) system. The project’s centerpiece was the Advanced Automation System, which was estimated to cost $2.5 billion by its planned completion in 1996.

The project suffered many delays and cost overruns, partly attributable to requirement changes that triggered extensive rework. The project was terminated in 1994 after the estimated final cost had risen to around $7 billion. Some major components were estimated to be as much as eight years behind schedule (Barlas, 1996). Some of the project work was salvaged for later ATC modernization efforts, but the Federal Government experienced a net loss of about $1.5 billion (DOT, 1998).

A more recent massive project failure strikes close to home for me. After the United States Congress enacted the Affordable Care Act (also known as Obamacare) in 2010, states established healthcare exchanges as marketplaces for residents to acquire health insurance. Some states built their own exchanges, others established state-federal partnerships, and still others relied on the federal exchange, HealthCare.gov. My state of Oregon attempted to build its own, excessively complex health insurance exchange, called Cover Oregon, in 2012. The state engaged a huge software contractor for the implementation. After investing about three years and spending some $305 million of taxpayer money, the state abandoned the project and switched to HealthCare.gov (Wright, 2016). Cover Oregon was a colossal failure that generated colossal lawsuits.

Striking the Balance

Nearly all technical people want to do good work and deliver high-quality products and services. Sometimes that desire clashes with outside factors, such as ridiculously short deadlines dictated by management or regulations imposed by governing bodies. Technical practitioners don’t always know about the business motivation or rationale behind those pressures. Quality—and integrity—need to be part of the discussion when a team contemplates what they can deliver that meets deadlines, achieves business objectives, and includes the right functionality, built in a sustainable way.

Like many people, I have a personal and professional philosophy to “Strive for perfection; settle for excellence.” Not everything’s going to be perfect, but I do my work as well as I can the first time to avoid the cost, time, embarrassment, and potential legal consequences of having to do it over. If that means taking more time to get it right up front, so be it. The long-term payoff is worth the upfront investment.

Lesson #46. Beware the crap gap.

The difference between quality and not-quality—also known as crap—often is surprisingly small. Hold up your hand with your thumb and index finger about an inch apart. I call that little separation the crap gap (Wiegers, 2019e). In many cases, doing just a little additional analyzing, asking, checking, or testing makes the difference between a quality product and one that customers perceive as crap. When I talk about the crap gap, I don’t refer to the ordinary mistakes that all human beings make occasionally, but rather to problems that result from haste, sloppiness, or inattention to details.

The Crap Gap Illustrated

Here’s an example of the crap-gap scenarios we all encounter in daily life (Wiegers, 2021). Recently I bought a major home appliance. I had a question, so I went to the contact form on the manufacturer’s website. The form required me to choose a topic and then a subtopic. However, no subtopics were displayed. No matter which major topic I chose, the only option available on the subtopic list was the default prompt: “Please select a topic.” When I tried to submit the form anyway, I got an error message that a subtopic is required. Because that was impossible, I couldn’t submit the form. I had to call the manufacturer, with all the attendant hassles of trying to reach a helpful support person.

Did no one find this problem while testing the website? Perhaps the function worked just fine in development, but the appropriate tables of options weren’t populated for the production version. Or maybe testing revealed the problem, but someone decided not to fix it then. Many months after I reported the problem, the web page now finally provides customized subtopic lists. Perhaps it didn’t cost the company much more to correct the code later than it would have during initial implementation. But how much customer time was wasted before the company finally fixed that bug? Businesses shouldn’t regard their customers’ time as being free.

As I mentioned earlier, I dislike performing rework, revisiting something that I thought was done because some problem reared its ugly head. An organization’s leaders set the standard by avoiding the crap gap in their own work and not tolerating it in others’ work. Management must shape a culture in which team members are expected, empowered, and enabled to do the job well the first time.

Crap Gap Scenarios in Software

Avoiding the crap gap often is just a matter of thinking a little bit more before proceeding. I encounter too many software products with errors that should have been caught during testing or designs that don’t reflect a proper focus on the user experience. For instance, when I log in to a popular financial services website, it reports that I have one notification. But when I click on the notification icon, a message says, “You don’t have any notifications.” As another example, I recently saw a printed report whose final page said, “Page 5 of 4.” These kinds of defects puzzle me and often waste my time.

Here are some categories in which software teams might encounter issues that could lead to an avoidable quality shortfall.

Assumptions. A BA might make an inaccurate assumption or record an assumption that customers are making but then neglect to verify whether the assumption is valid.

Solution Ideas. Customers often provide input to BAs in the form of solution ideas, not requirements. Unless the BA looks past the proposed solution to understand the real need, it’s easy to solve the wrong problem or specify an inadequate solution, which must be rectified later.

Regression Testing. If a developer doesn’t run a regression test after making a quick code change, they might miss an error in the modified code—a bad fix. Even a small change can unexpectedly break something else.

Exception Handling. Implementations might focus so much on the “happy path” of expected system behavior that they fail to handle common error conditions. Missing, erroneous, or incorrectly formatted data input will cause unexpected results or even a system failure. (A short sketch following these scenarios illustrates the point.)

Change Impacts. People sometimes implement a change without considering whether it will affect other, unobvious parts of the system or related products. Changing one aspect of a system’s behavior generates an inconsistent user experience if similar functionality that appears elsewhere isn’t modified correspondingly.
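
The exception-handling scenario lends itself to a short illustration. Here is a minimal Python sketch of validating one hypothetical input field along its unhappy paths as well as its happy path; the field, the rules, and the messages are illustrative assumptions:

def parse_quantity(raw: str) -> int:
    """Parse an order quantity, rejecting missing, malformed, or out-of-range input."""
    if raw is None or not raw.strip():
        raise ValueError("quantity is required")
    try:
        value = int(raw.strip())
    except ValueError:
        raise ValueError(f"quantity must be a whole number, got {raw!r}")
    if not 1 <= value <= 999:
        raise ValueError("quantity must be between 1 and 999")
    return value

# The happy path works, and each common error condition produces a clear
# message instead of an unexpected result or a failure deep inside the system.
print(parse_quantity(" 12 "))          # 12
for bad in ["", "12.5", "0", "abc"]:
    try:
        parse_quantity(bad)
    except ValueError as err:
        print(f"rejected {bad!r}: {err}")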

Lesson #44, “High quality naturally leads to higher productivity,” described the cost of quality and the notion that quality is free. Quality isn’t truly free in the sense of costing you nothing. Defect prevention, detection, and correction all consume resources. Nonetheless, shrinking the crap gap will pay off as you sidestep avoidable quality problems and their associated costs.

Lesson #47. Never let your boss or your customer talk you into doing a bad job.

A software developer named Chizuko said that her project manager had told her, “To save time, I don’t want you to do any unit testing.” She was shocked at this directive. As an experienced developer, Chizuko knew that unit testing was important to verify that a program was implemented correctly. Chizuko felt that her manager was demanding she take a quality shortcut in the faint hope that it would somehow speed her progress. Perhaps it would save Chizuko some time, but skipping unit testing would doubtless lead to defects being found later than they should be. She opted to proceed with unit testing anyway.

I have long believed that we should never let our managers, customers, or colleagues talk us into doing a bad job (Wiegers, 1996). It’s a matter of personal and professional integrity to stick to our principles. We should each commit to following the best professional practices we know, adapting them to be effective in each situation. If you’re pressured into a situation that makes you professionally uncomfortable, try to describe what you need so you’re able to deliver something that won’t constitute doing a bad job. As with so many things, it’s possible to take this philosophy to a no-longer-useful extreme. Seek suitable balance points of professional excellence while not being overly dogmatic or inflexible.

Power Plays

People in power can attempt to influence you to do what you consider to be a bad job in various ways. Suppose someone to whom you present an estimate for upcoming work doesn’t like your numbers. They might pressure you to reduce your estimate to help them, a senior manager, or a customer achieve their own budgetary or delivery goals. It’s an understandable motivation, but that’s not a good reason to change an estimate.

Someone who pushes back against an estimate might feel pressures you’re not aware of. They’re entitled to an explanation of how you derived the estimate and a discussion about whether it could be adjusted. (See Lesson #28, “Don’t change an estimate based on what the recipient wants to hear.”) However, changing an estimate simply because someone doesn’t care for it denies your interpretation of reality. It doesn’t change the likely project outcome.

Rushing to Code

Suppose you work in an internal corporate IT department and a new project comes along. Your business stakeholders might pressure your software team to begin writing code immediately, even without a sound business case and clear requirements. Perhaps they have project funding that they want to spend right away before they lose it. The IT staff also might be eager to get started. Maybe they don’t want to spend time discussing requirements because those will probably change anyway.

As a consequence, a lot of aimless coding gets done toward an obscure outcome. Too often, nobody is held accountable for the missed target, because no target was clearly defined anyway. Might it not be better for the IT department to resist the business pressure to begin the journey until some destination is established?

Lack of Knowledge

People who lean on you to do something you consider inappropriate might not understand the software development practices you advocate. For example, someone might regard holding technical peer reviews of work products as unnecessary. Maybe they don’t think it’s necessary to spend time on requirements elicitation discussions or writing down requirements. Managers or customers could press for the product’s delivery even if it hasn’t satisfied all of its release criteria. Customers don’t always appreciate that taking quality shortcuts might let you deliver something sooner, but that “something” could require extensive patching to be usable.

I once had a manager who didn’t understand how I could write user documentation for a new application before we finished the software. He was a scientist who had done some programming, so he thought he understood software development. I explained that I knew what the system would do, thanks to our requirements and design work. Therefore, I wasn’t wasting time writing help screens and a user manual before we implemented the final line of code.

A customer told me he didn’t understand why a project would take as long as my team anticipated. Based on his limited experience with computers, he declared that the work was a SMOP—a simple matter of programming. I hadn’t heard that expression before, but it certainly didn’t apply to that project, as I attempted to explain to him. People who don’t do it for a living don’t appreciate the considerable difference between computer programming and software engineering.

Shady Ethics

Independent consultants and contractors can be subjected to various kinds of bad-job pressure. A prospective consulting client once asked me to come into their company under false pretenses. He wanted our contract to state that I’d be performing certain work, although I’d actually be doing something different. The client couldn’t get funding for the activity he had in mind, but he had money available for the other service. I viewed his request as unethical, so I declined both the engagement and the client. Accepting the conditions would have constituted professional malpractice on my part and could have exposed me to legal problems if this client’s managers found out what was going on.

Circumventing Processes

Sensible processes are in place for a reason. When users asked me about making a change in some application when I worked at Kodak, I directed them to our very simple change request tool. The information the user submitted would let the appropriate people make good business decisions about requested changes. Some users didn’t want to bother submitting a request; couldn’t I just work the change in? Well, no—sorry. Bypassing reasonable, practical processes for convenience constitutes a bad job in my view.

You might need to explain why the approach you’re advocating is necessary. Point out how it adds quality and value to the project. That information will help the other person understand why you’re resisting their entreaty. However, some people are simply unreasonable. Even with your best efforts to convince them otherwise, they might pressure you to cut corners or follow an inadvisable approach.

Suppose you resist acting in a way that you regard as unprofessional or unethical. The other party might complain to your manager that you’re wasting time on unnecessary activities or being uncooperative. The manager could back you up, or they could exert additional pressure on you to comply. In the second case, it becomes your choice. Will you succumb to the pressure, with its potential negative impacts on the project and your psyche? Or will you continue to work in the best professional way you know? There’s some risk there, but I vote for the latter.

Lesson #48. Strive to have a peer, rather than a customer, find a defect.

I made a serious error in the manuscript for a book I wrote recently—I got something exactly backward. Fortunately, one of my sharp-eyed peer reviewers caught the error. I was very grateful. It would have been awkward had the book gone to press with that mistake in it.

Even the most skilled writers, business analysts, programmers, and other professionals make mistakes. No matter how good your work is, having others look it over makes it better. Many years ago, I adopted the routine practice of asking colleagues to review my code and other deliverables I created for a software project.

Presenting your creation to other people and asking them to tell you what’s wrong with it is not an instinctive behavior—it’s a learned behavior. It’s human nature to be embarrassed or even resentful when people find problems in what we’ve done. I always feel silly when a reviewer spots a mistake I made, but the phrase “good catch” immediately pops into my mind. When I say “Thanks, good catch” to the reviewer, the tone of the conversation gets more pleasant because I’m expressing gratitude for the finding instead of acting hurt or defensive. I would far prefer to have a friend or colleague discover one of my errors before release than to have a customer find it afterward.

Some people think their work doesn’t need to be reviewed, but the best software developer I’ve ever known felt uncomfortable unless someone else had reviewed his code. He knew how valuable the input from other smart developers was. Different reviewers raise different kinds of issues and provide varying levels of feedback, ranging from superficially obvious to deeply insightful. That’s true whether you’re reviewing a textual manuscript, a requirements specification, or code. All of the perspectives are helpful.

Peer reviews are a true software engineering best practice. After experiencing their benefits for decades, I wouldn’t want to work in an organization where reviews weren’t embedded in the culture.

Benefits of Peer Reviews

Technical peer reviews are a proven technique for improving both quality and productivity. They improve quality by revealing defects earlier than they might otherwise be detected. As we’ve seen, those early discoveries increase productivity because team members spend less time fixing defects later in development or following delivery.

People often wait until they’ve finished an item to ask others to look at it. However, reviewing a work product before it’s complete lets its consumers assess how well the item will meet their needs. It’s frustrating to receive some deliverable like a requirements document, only to discover that it doesn’t contain all the information you need, includes material that’s not useful, or isn’t organized well for your purposes. Providing feedback on a deliverable before it’s finished lets the author adjust it to be more useful to its audience.

Other than when pair programming, we rarely see the internals of someone else’s work unless we have to fix a bug or add an enhancement. Because people other than the original programmer often must modify code in the future, it helps if they’ve had some exposure to it through reviews. If you bring in reviewers from outside the project team, they can learn about some aspects of the product and see how another team operates. This cross-fertilization helps to disseminate effective practices throughout an organization.

I see a lot of discussion about code reviews in the software literature these days. I’m always delighted to see people take reviews seriously, and code reviews are certainly important. However, software teams generate many other artifacts that also are candidates for review. That’s why I prefer to use the more general term peer review. This term doesn’t mean we are reviewing our peers, but that we’re inviting some of our professional peers to review pieces of our work. Besides code, a project team might create plans, requirements in various forms, several types of designs, test plans and scripts, help screens, documentation, and more. Anything that a person creates could contain errors and could benefit from having other people look it over.

Varieties of Software Reviews

You can perform reviews in various ways: with or without a meeting, online or in person, and with varying degrees of rigor. All the approaches have their advantages and limitations. Review meetings can yield a synergistic effect, in which one person’s comment triggers another to spot a problem that no one saw on their own. But reviews with meetings cost more and are harder to schedule than those without meetings. Here are some of the ways that people can examine a colleague’s work product (Wiegers, 2002a).

Peer deskcheck. Ask one coworker to look over something you created and offer suggestions for improvements or corrections. The key here is to pair up with someone who has a sharp eye and the time to help out. Offer to return the favor—it’s only fair.

Passaround. Distribute the item to several of your peers and ask them to give you feedback independently. Review tools are available that let reviewers see and discuss each other’s comments. The passaround is a good approach for conducting asynchronous or distributed reviews when it’s either inconvenient or unnecessary for participants to meet.

Walkthrough. The author leads the discussion, explaining the work product a chunk at a time and soliciting feedback. Walkthroughs are often used for design reviews when brainstorming with colleagues is merited.

Team review. The author distributes the work product and any supporting materials to a few reviewers in advance, so they have time to examine it independently and note any issues. During a meeting, the reviewers bring up their observations. A moderator keeps the discussion on track and ensures that the group covers the work product at a reasonable pace. Going too quickly misses defects; going too slowly takes longer and bores people. A recorder can collect the issues raised on standard forms.

Inspection. The most formal type of review includes several roles that participants perform during a structured meeting: author, moderator, recorder, inspector, and sometimes a reader (Gilb and Graham, 1993; Radice, 2002). Although inspections are the most expensive review method, considerable research indicates that they’re the most effective at revealing defects. Inspections are most appropriate for higher-risk work products.

Even if you don’t perform any of these structured peer reviews, simply inviting a colleague to look over your shoulder and help find a coding error or improve a bit of your design is an excellent idea. Any review is better than no review. Software engineers at Google suggested several code review best practices, including these: be polite and professional, write small changes, write good change descriptions, and keep reviewers to a minimum (Winters, Manshreck, and Wright, 2020).

The Soft Side: Cultural Implications of Reviews

Peer reviews are both a technical activity and an interpersonal interaction. How an organization practices reviews—or doesn’t—reveals its attitudes toward quality and teamwork. If team members hesitate to share their work for fear of being criticized, that’s a red flag. If reviewers criticize the author for making mistakes—or just for doing the work differently from how they would have done it—that’s another red flag. Reviews handled poorly can be damaging to a software team’s culture (Wiegers, 2002b).

In a healthy software engineering culture, team members both offer and accept constructive criticism. They aren’t territorial, guarding their work against prying eyes. They willingly spend part of their time looking over someone else’s work because they recognize the benefits. It’s a mutual back-scratching mindset: you help me, I help you, and everyone wins.

One of my consulting clients had a review program in place. The participants referred to holding a review as “going into the shark tank.” This is not positive imagery. Who would want to go into a shark tank, unprotected, with bait in their hand? Any author who walks out of a review session feeling insulted or attacked will never again voluntarily invite others to look over their work. Those scars can linger for years.

Reviews can enhance collaborative teamwork when they’re performed properly at the right time by the right people. They can also be harmful if the participants aren’t considerate about how they provide feedback. The following guidelines can help reviewers contribute to a constructive activity that people perceive as worthwhile.

• Focus your comments on the work product, not on the author. Reviewers aren’t there to show how smart they are but to improve the team’s collective efforts.

• Phrase comments as observations, not accusations. Say “I” more than “you.” “I didn’t see where these variables are initialized” is easier to hear than “You didn’t initialize these variables.”

• Focus on substance over style by hunting for major defects. The author can help by eliminating easy-to-find issues like typographical errors before the review. Follow standard document templates and code formatting conventions (e.g., use pretty-print) so stylistic matters don’t become distracting topics for debate.

• As an author, set aside your ego enough to be receptive to improvement suggestions. You’re ultimately responsible for the quality of your work, but consider the input your coworkers offer.

You needn’t wait for your organization to establish a review program or a review culture—just ask for a little help from your friends. The basic success factor for any review is the mindset that you’d rather have your colleagues uncover defects instead of assuming that your work is error-free. If you don’t share this philosophy, just pass your work on to the next development stage or to the user, and then wait for the phone to ring.

Lesson #49. Software people love tools, but a fool with a tool is an amplified fool.

My friend Norm is an expert woodworker. He designed and built his woodshop, including the building itself. His shop contains countless hand and power tools, and he knows how and when to use each of them properly and safely. Expert software engineers also know the right tools to use and how to apply them effectively.

Perhaps you’ve heard that “A fool with a tool is still a fool,” sometimes attributed to software engineer Grady Booch. That’s too generous. A tool gives someone who doesn’t quite know what they’re doing a way to do it faster and perhaps more dangerously. That leverage just amplifies their ineffectiveness. All tools have benefits and limitations. To reap the full benefit, practitioners need to understand the tool’s concepts and methods so they can apply it correctly to appropriate problems. When I say tool here, I’m referring both to software packages that facilitate or automate some project work (estimation, modeling, testing, collaboration) and to specialized software development techniques, such as use cases.

Tools can make skilled team members more productive, but they don’t make untrained people better. Providing less capable developers with tools can actually inhibit their productivity if they don’t use them wisely. If people don’t understand a technique and know when—and when not—to use it, a tool that lets them do it faster and prettier won’t help.

A Tool Must Add Value

A software team’s tools can help them build the right product correctly by saving time or boosting quality, but I’ve seen numerous examples of ineffective tool use. My software group once adopted Microsoft Project for project planning. Most of us found Project helpful for recording and sequencing tasks, estimating their duration, and tracking progress. One team member got carried away, though. She was the sole developer on a project with three-week development iterations. She spent a couple of days at the start of each iteration creating a detailed Microsoft Project plan for the iteration, down to one-hour resolution. I’m all in favor of planning, but her use of this tool was time-wasting overkill.

I know of a government agency that purchased a high-end requirements management (RM) tool but benefited little from it. They recorded hundreds of requirements for their project in a traditional requirements specification document. Then they imported those requirements into the RM tool, but the document remained the definitive repository. Whenever the requirements changed, the BA had to update both the document and the contents stored in the RM tool’s database. The only major tool feature that the team exploited was to define a complex network of traceability links between requirements. That’s useful, but later they discovered that no one ever used the extensive traceability reports they generated! This agency’s ineffective tool use consumed considerable time and money but yielded little value.

Modeling tools are easily misused. Analysts and designers sometimes spend excessive effort perfecting models. I’m a big fan of visual modeling to facilitate iterative thinking and reveal errors, but people should create models selectively. Modeling portions of the system that are already well understood and drilling down to the finest details don’t add proportionate value to the project.

Besides automated tools, specialized software practices also can be applied inappropriately. As an example, use cases help me understand what users need to do with a system, so I can then deduce the necessary functionality to implement. But I’ve known some people who tried to force-fit every known bit of functionality into a use case simply because that’s the requirements technique their project employed. If you already know about some needed functionality, I see little value in repackaging it just to say you have a complete set of use cases.

A Tool Must Be Used Sensibly

I was at a consulting client’s site the same day that one of their team members was configuring a change-request tool they’d just purchased. I endorse sensible change control mechanisms, including using a tool to collect change requests and track their status over time. However, the team member configured the tool with no fewer than twenty possible change-request statuses: submitted, evaluated, approved, deferred, and so forth. Nobody’s going to use twenty statuses; around seven should suffice. Making it so complex imposes an excessive and unrealistic burden on the tool users. It could even discourage them from using the tool at all, making them think it’s more trouble than it’s worth.
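
To make the contrast concrete, here is a minimal sketch of a leaner status model. The specific state names, and the count of seven, are hypothetical illustrations, not taken from that client’s tool or from any particular product.

    from enum import Enum, auto

    class ChangeRequestStatus(Enum):
        """A deliberately small set of change-request states (illustrative only)."""
        SUBMITTED = auto()    # request received, awaiting evaluation
        EVALUATED = auto()    # impact and cost assessed
        APPROVED = auto()     # accepted for implementation
        DEFERRED = auto()     # postponed to a later release
        REJECTED = auto()     # will not be implemented
        IMPLEMENTED = auto()  # change made and verified
        CLOSED = auto()       # requester notified, request archived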

While teaching a class on software engineering best practices one time, I asked the students if they used any static code analysis tools, such as lint. The project manager said, “Yes, I have ten copies of PC-Lint in my desk.” My first thought was, “You might want to distribute those to the programmers, as they aren’t doing any good in your desk.” Software tools often become shelfware, products that people have available but don’t use for some reason. If people don’t understand how to apply a tool effectively, it’s worthless to them.

I asked the same question about static code analysis at another company. One student said that when his team ran lint on their system’s codebase, it reported 10,000 errors and warnings, so they didn’t use it again. If a sizable program has never been passed through an automated checker, it will probably trigger many alerts. Many of the reports were false positives: inconsequential warnings or issues the team would decide to ignore. But there were likely some real problems in there, lost in the noise. When the option exists, configure the tools so you can focus on items of real concern and not be overwhelmed by distracting minor issues.
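
When project-wide configuration isn’t enough, targeted suppression at the site of a reviewed false positive keeps the genuine warnings visible. The sketch below assumes a Python codebase checked with pylint; the scenario is made up, and message names vary across checkers and versions.

    import logging

    logger = logging.getLogger(__name__)

    def load_legacy_config(path):
        """Read an optional legacy configuration file, falling back to defaults."""
        try:
            with open(path, encoding="utf-8") as config_file:
                return config_file.read()
        # Catching Exception broadly is flagged by the checker. The team reviewed
        # this occurrence and chose to keep the behavior, so the pragma silences
        # only this message on only this line, not the whole category.
        except Exception:  # pylint: disable=broad-except
            logger.warning("Could not read configuration at %s; using defaults", path)
            return ""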

I’ve run into the same false-positive problem with a commercial grammar checker I use in my writing. I disregard more than half of the issues it reports because they’re inappropriate for what I’m writing, inconsistent with my writing style, or simply wrong. It takes considerable time to wade through all the reported issues to find the helpful nuggets. Unfortunately, this tool lacks useful configuration options that would improve the signal-to-noise ratio.

A Tool Is Not a Process

People sometimes think that using a good tool means the problem’s solved. However, a tool is not a substitute for a process; a tool supports a process. When one of my clients told me they used a problem-tracking tool, I asked some questions about the process that the tool supported. I learned that they had no defined process for receiving and processing problem reports; they only had a tool. Without an accompanying practical process, a tool can increase chaos if people don’t use it appropriately.

Tools can lead people to think that they’re doing a better job than they are. Automated testing tools aren’t any better than the tests stored in them. Just because you can run automated regression tests quickly doesn’t mean those tests effectively find errors. A code coverage tool could report a high percentage of statement coverage, but that doesn’t guarantee that all the important code was executed. Even a high statement coverage percentage doesn’t tell you what will happen when the untested code is executed, whether all the logic branches were tested in both directions, what will happen with different input data values, or whether any necessary paths were missing from the implemented code. Nor do tools replace people. Humans who test software will find issues beyond those that are programmed into testing tools.
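
A tiny hypothetical Python function shows how misleading a statement-coverage number can be: a single test executes every statement yet never exercises the branch where the discount is absent, so whatever happens on that path remains untested.

    def apply_discount(price, discount=None):
        """Return the price after applying an optional fractional discount."""
        if discount is not None:
            price = price * (1 - discount)
        return round(price, 2)

    # This single test executes 100 percent of the statements above...
    assert apply_discount(10.0, 0.25) == 7.5
    # ...but it never takes the branch where discount is None, so a failure on
    # that path would go unnoticed despite the impressive coverage report.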

I’ve spoken to people who claimed their project was doing a fine job on requirements because they stored them in a requirements management tool. RM tools do offer many valuable capabilities. However, the ability to generate nice reports doesn’t mean that the requirements stored in the database are any good. RM tools are a vivid illustration of the old computing expression GIGO: garbage in, garbage out. The tool won’t know if the requirements are accurate, clearly written, or complete, nor will it detect missing requirements.

Some tools can scan a set of requirements for conflicts, duplicates, and ambiguous words, but that assessment doesn’t tell you if the requirements are logically correct or even necessary. A team that uses an RM tool first needs to learn how to do a good job of eliciting, analyzing, and specifying requirements. Buying an RM tool doesn’t make you a skilled business analyst. You should learn how to use a technique manually and prove to yourself that it works for you before automating it (Davis, 1995).
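As a rough illustration of what such a scan amounts to, the sketch below flags vague words in requirement statements. The word list and the sample requirements are made up, and nothing in a check like this can tell you whether a requirement is correct, complete, or even necessary.

    AMBIGUOUS_WORDS = {"fast", "user-friendly", "appropriate", "flexible", "robust"}

    def flag_vague_terms(requirements):
        """Report requirements that contain vague, untestable wording."""
        findings = []
        for req_id, text in sorted(requirements.items()):
            hits = sorted(word for word in AMBIGUOUS_WORDS if word in text.lower())
            if hits:
                findings.append((req_id, hits))
        return findings

    reqs = {
        "REQ-12": "The system shall respond to search queries fast.",
        "REQ-13": "Exported files shall use UTF-8 encoding.",
    }
    print(flag_vague_terms(reqs))  # [('REQ-12', ['fast'])]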

Properly applied tools and practices can add great value to a project, increasing quality and productivity, improving planning and collaboration, and bringing order out of chaos. But even the best tools won’t overcome weak processes, untrained team members, challenging change initiatives, or cultural issues in the organization (Costello, 2019). And always remember one of Wiegers’s Laws of Computing: “Artificial intelligence is no substitute for the real thing” (Wiegers, 1989).

Lesson #50. Today’s “gotta get it out right away” development project is tomorrow’s maintenance nightmare.

After a system is released into production, it enters a maintenance status. There are four categories of software maintenance (Merrill, 2019):

Adaptive: Modifying the system to work in a changed operating environment

Corrective: Diagnosing and repairing defects

Perfective: Making changes to increase customer value, such as adding new functionality, improving performance, and enhancing usability

Preventive: Optimizing and restructuring code to make it more efficient, easier to understand, more maintainable, and more reliable

For systems that are developed incrementally, adding new, planned functionality and extending existing functionality don’t count as perfective maintenance. That’s just part of the development cycle. However, delivered increments can still require corrective maintenance to fix defects.

This chapter’s previous lessons showed why corrective maintenance gets more expensive over time and how quality problems sap a team’s productivity. Beyond requirement and code defects, software with design problems will continue to eat up resources as developers and maintainers improve the codebase over time through preventive maintenance.

Technical Debt and Preventive Maintenance

In the interest of speed, development teams sometimes take quality shortcuts that generate technical debt. They might not practice good defensive programming, such as input data validations and exception handling. Code that’s written expeditiously could have a haphazard design that works for now but isn’t structured for the long haul. It might not execute efficiently or be easily understandable by someone who must work with it in the future. Perhaps the developers didn’t design the software or a database to accommodate future extensions easily. New functionality that’s quickly pasted onto an otherwise solid demo version or prototype can make it into the production codebase, adding to the technical debt.
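
The difference is easy to see even in a small hypothetical example. The first version below assumes its input is well formed; the defensive version validates the data and handles the failures it can reasonably anticipate. The field names are illustrative.

    # Expedient version: works only when the record is exactly as expected.
    def parse_quantity_quick(record):
        return int(record["quantity"])

    # Defensive version: validates input and reports problems meaningfully.
    def parse_quantity(record):
        """Return a non-negative integer quantity or raise ValueError."""
        if "quantity" not in record:
            raise ValueError("record is missing a 'quantity' field")
        try:
            quantity = int(record["quantity"])
        except (TypeError, ValueError) as exc:
            raise ValueError(f"quantity is not an integer: {record['quantity']!r}") from exc
        if quantity < 0:
            raise ValueError(f"quantity must be non-negative, got {quantity}")
        return quantity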

Quick code patches can have unexpected side effects. Brittle code shatters when someone changes it, triggering a cascade of modifications needed to keep it working. A future developer might opt to rebuild a troublesome module entirely rather than struggling to incorporate new functionality or make it work in a changed environment.

As with other debts, technical debt must be repaid eventually—with interest. The longer the debt lingers unaddressed in the system, the more interest it accrues. Repaying software technical debt involves refactoring, restructuring, or rewriting code (there’s that ugly re- prefix again). As Ward Cunningham (1992) explained:

A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation.

Much preventive maintenance effort is devoted to erasing technical debt. Whether you’re refactoring code from a previous iteration on the current project or working on a fragile legacy system, your goal should be to leave the code in better shape than you found it. That’s more constructive than merely cursing those earlier developers who created the mess that confronts you.
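
Leaving code in better shape can be as modest as removing duplication while you’re in the neighborhood. Here is a small hypothetical before-and-after pair with identical behavior; the pricing rules are invented for illustration.

    # Before: copy-and-paste branches that duplicate the pricing structure.
    def shipping_cost(weight_kg, express):
        if express:
            if weight_kg <= 1:
                return 12.0
            return 12.0 + (weight_kg - 1) * 4.0
        if weight_kg <= 1:
            return 5.0
        return 5.0 + (weight_kg - 1) * 1.5

    # After: the same behavior, with the shared structure made explicit.
    def shipping_cost_refactored(weight_kg, express):
        base, per_kg = (12.0, 4.0) if express else (5.0, 1.5)
        extra_kg = max(weight_kg - 1, 0)
        return base + extra_kg * per_kg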

Conscious Technical Debt

There are times when it’s reasonable to accrue some technical debt, provided the team fully appreciates that it will cost more to rework the deficient design in the future. If you expect the code to have a short lifetime, you might decide that it’s not worth taking the time for thoughtful design. Too often, though, that expectation fits in the famous-last-words category. A lot of so-called temporary code, such as the code you might build into a prototype, finds its way into production software, where it bedevils future maintainers.

If you’re aware that you’re doing expedient design and plan time in future iterations to address the shortcomings—instead of just hoping they won’t cause problems—it might make sense to defer thorough design thinking. Or maybe you’re doing something novel, uncertain, or exploratory. You find a design that more or less works, and that’s good enough for now. You’ll need to improve the design eventually, though, so make sure that you do.

The decision boils down to consciously accepting some technical debt for good reasons, with the full expectation that you’ll need to spend more time on preventive maintenance later as a result. That is, there’s deliberate technical debt, and there’s accidental technical debt (Soni, 2020). Failing to rectify design and code shortcomings makes it harder and harder to work with the system in subsequent iterations or during operation. Those accumulated problems slow down continued development now, and they consume excessive maintenance effort later.

Erasing technical debt adds its own risks to the project. It might feel as though you’re just tuning up something that already works, but those improvements need the same level of verification and approval as other project code. Regression testing and other quality practices to catch bad fixes consume time beyond the code revisions themselves. The more extensive the code and design rework, the greater the risk of inadvertently breaking something else.

Designing for Quality, Now or Later

There’s always something more pressing to work on than fixing up existing code. A manager can find it difficult to commit resources to paying down technical debt while customers demand more software now. They need to bite the bullet. Many software applications live on for decades with an ever-growing—and increasingly crumbly—codebase. As David Rice (2016) pointed out:

The primary pain point for working with legacy code is how long it takes to make changes. So if you intend for your code to be long-lived, you need to ensure that it will be entirely pleasurable for future developers to make changes to it.

Perhaps “entirely pleasurable” is too much to expect. Still, strive to build software that lets future developers work on it without agony.

Make preventive maintenance a part of your daily development work, improving designs and code whenever you touch them. Don’t work around the quality shortcomings you encounter—minimize them. Incremental preventive maintenance is like brushing your teeth daily; performing rework to reduce built-up technical debt is like going to the dentist periodically for a cleaning. I spend a few minutes scrubbing and flossing every day to give my dental hygienist as little as possible to do. Both dental work and software development are less painful if you deal with issues as you go along instead of letting them accumulate.

Next Steps: Quality

1. Revisit your definition of quality from the First Steps. Would you change your definition? If so, would you also change your perception of your product’s quality?

2. Identify which of the lessons described in this chapter are relevant to your experiences with software quality.

3. Can you think of any other quality-related lessons from your own experience that are worth sharing with your colleagues?

4. Identify any practices described in this chapter that might be solutions to the quality-related problems you identified in the First Steps at the beginning of the chapter. How could each practice improve the quality of your products?

5. How could you tell if each practice from Step #4 was yielding the desired results? What would those results be worth to you?

6. Identify any barriers that might make it difficult to apply the practices from Step #4. How could you break down those barriers or enlist allies to help you implement the practices?

7. Put into place process descriptions, templates, guidance documents, and other aids to help future project teams apply your local quality best practices effectively.
