Chapter 21. Software Quality Assurance

IN THIS CHAPTER

  • Quality Is Free

  • Testing and Quality Assurance in the Workplace

  • Test Management and Organizational Structures

  • Capability Maturity Model (CMM)

  • ISO 9000

This book's focus so far has been on its title, Software Testing. You've learned how to plan your testing, where to look for bugs, and how to find and report them. If you're new to the field of software testing, you'll most likely first apply your skills in these areas.

It's important, though, to get a sense of the larger picture so that you can understand how much more needs to be accomplished and how far you can go in your career. The intent of Part VI, “The Future,” and of this chapter is to give you an overview of the evolutionary steps beyond software testing, to show you what lies ahead, to outline the challenges, and to hopefully motivate you to make improving software quality your ultimate goal.

Highlights of this chapter include

  • What it costs to create quality software

  • How software testing differs from software quality assurance

  • The different ways a software testing or quality assurance group can fit into a project team

  • How the software Capability Maturity Model is used

  • The ISO 9000 standard

Quality Is Free

Quality is free? Impossible? Nope, it's true. In 1979, Philip Crosby[1] wrote in his book Quality Is Free: The Art of Making Quality Certain that it costs nothing extra (actually, it costs less) to produce something of high quality rather than something of low quality. Given what you've learned so far about software testing and the work involved in finding and fixing bugs, this may seem impossible, but it's not.

Think back to the graph from Chapter 1, “Software Testing Background,” (repeated here as Figure 21.1) that showed the cost of finding and fixing bugs over time. The later bugs are found, the more they cost—not just linearly more, but exponentially more.

Figure 21.1. There is very little cost if problems are found early in the project.

Now, divide the cost of quality into two categories: the costs of conformance and the costs of nonconformance. The costs of conformance are all the costs associated with planning and running tests just one time to make sure that the software does what it's intended to do. If bugs are found and you must spend time isolating, reporting, and regression testing them to make sure they're fixed, the costs of nonconformance go up. Because these bugs are found before the product is released, they're classified as internal failures, and their costs fall mostly on the left side of Figure 21.1.

If bugs are missed and make it through to the customers, the result will be costly product support calls, possibly fixing, retesting, and releasing the software, and—in a worst-case scenario—a product recall or lawsuits. The costs to address these external failures fall under the costs of nonconformance and are the ones on the right side of Figure 21.1.

In his book, Crosby demonstrates that the costs of conformance plus the costs of nonconformance due to internal failures are less than the costs of nonconformance due to external failures. Stomp out your bugs early, or ideally don't have any in the first place, and your product will cost less than it would otherwise. Quality is free. It's common sense.
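
To make the arithmetic concrete, here's a minimal sketch in Python comparing the total cost of quality under two strategies for the same set of bugs. Every number in it (the per-bug costs, the bug counts, the testing budget) is invented purely for illustration; only the roughly exponential growth of cost by phase follows the shape of Figure 21.1.

# Toy cost-of-quality comparison. All figures are invented for
# illustration; the tenfold growth per phase mirrors the shape of
# Figure 21.1, not any real project's data.

# Hypothetical cost (in dollars) to find and fix one bug, by the
# phase in which it's found.
COST_PER_BUG = {
    "specification": 1,
    "design": 10,
    "code": 100,
    "test": 1_000,      # internal failure: caught before release
    "release": 10_000,  # external failure: found by a customer
}

def cost_of_quality(bugs_by_phase, conformance_cost):
    """Cost of conformance (testing) + cost of nonconformance (bugs)."""
    nonconformance = sum(COST_PER_BUG[phase] * count
                         for phase, count in bugs_by_phase.items())
    return conformance_cost + nonconformance

# The same 100 bugs, two ways: catch most of them early, or let
# most escape to the field. The testing budget is identical.
early = {"specification": 40, "design": 30, "code": 20, "test": 9, "release": 1}
late  = {"specification": 0, "design": 0, "code": 10, "test": 40, "release": 50}

print("find early:", cost_of_quality(early, conformance_cost=20_000))  # 41,340
print("find late: ", cost_of_quality(late, conformance_cost=20_000))   # 561,000

Same bugs, same testing budget, yet more than a tenfold difference in total cost. That's Crosby's argument in miniature.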

Unfortunately, portions of the software industry have been slow to adopt this simple philosophy. A project will often start with good intentions, and then, as problems crop up and schedule dates are missed, rules and reason go out the window. Regard for higher future costs is written off in favor of getting the job done today. The trend is turning, however. Companies are realizing that their cost of quality is high and that it doesn't need to be. Customers are demanding better quality software, and competitors are delivering it. Realization is setting in that the words Crosby wrote more than 25 years ago for the manufacturing industry apply just as well to the software industry today.

Testing and Quality Assurance in the Workplace

Depending on the company you work for and the project you're working on, you and your peers can have one of several common names that describe your group's function: Software Testing, Software Quality Assurance, Software Quality Control, Software Verification and Validation, Software Integration and Test, or one of many others. Frequently these names are used interchangeably, or one is chosen over the others because it sounds more “official”—Software Quality Assurance Engineer versus Software Tester, for example. It's important to realize, though, that these names have deeper meanings and aren't necessarily plug-in replacements for each other. On one hand, there's the philosophy that “it's only a name” and that what you ultimately do in your job is what counts. On the other hand, your job title or your group's name is what others on the project team see. That label indicates to them how they will work with you, what expectations they should have, what deliverables you will provide to them, and what they will give to you. The following sections define a few of the common software-test-group names and should help clarify the differences among them.

Software Testing

It can't be emphasized enough, so here it is, one more time:

The goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed.

Throughout this book you've learned how to accomplish this goal and the realities and limitations of doing so. Maybe you've realized by now (and if you haven't, that's okay) that software testing can be simply described as an assess, report, and follow-up task. You find bugs, describe them effectively, inform the appropriate people, and track the bugs until they're resolved.

NOTE

The definition of a software tester's job used in this book actually goes a step further than assess, report, and follow-up by tacking on the phrase “and make sure they get fixed.” Although there are test groups that would replace this phrase with simply “and report them,” I believe that to be an effective tester you need to take personal responsibility for the bugs you find, track them through their life cycle, and persuade the appropriate people to get them fixed. The easy way out is to simply stick them in the bug database and hope that someone eventually notices and does something with them, but if that's all there was to testing, you could argue that there's little reason to look for bugs in the first place.
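
Tracking a bug through its life cycle is easy to picture as a small state machine. The following sketch is hypothetical; the states and transitions form a generic life cycle, not the workflow of any particular bug-tracking tool.

# A generic bug life cycle as a tiny state machine. The states and
# transitions are illustrative, not any real tracking tool's workflow.
from enum import Enum

class BugState(Enum):
    OPEN = "open"          # reported; waiting for a fix
    RESOLVED = "resolved"  # the programmer believes it's fixed
    CLOSED = "closed"      # the tester has verified the fix

# Which moves are legal from each state.
TRANSITIONS = {
    BugState.OPEN:     {BugState.RESOLVED},               # fix checked in
    BugState.RESOLVED: {BugState.CLOSED, BugState.OPEN},  # verified, or reopened
    BugState.CLOSED:   {BugState.OPEN},                   # it regressed
}

class Bug:
    def __init__(self, bug_id, summary):
        self.bug_id = bug_id
        self.summary = summary
        self.state = BugState.OPEN

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal move: {self.state} -> {new_state}")
        self.state = new_state

# Follow-up in action: the tester doesn't stop at reporting.
bug = Bug(42, "Crash when saving an empty file")
bug.move_to(BugState.RESOLVED)  # the programmer fixes it
bug.move_to(BugState.CLOSED)    # the tester regression-tests and closes it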

Being a software tester and working under this charter has a unique and very important characteristic: You aren't responsible for the quality of the software! This may sound strange, but it's true. You didn't put the bugs in the software; you had your project manager and the programmers review and approve your test plan; you executed your plan to the letter; and despite all that effort, the software still had bugs. It's not your fault!

Think about it. A doctor can't make someone's fever go down by taking her temperature. A meteorologist can't stop a tornado by measuring the wind speed. A software tester can't make a poor-quality product better by finding bugs. Software testers simply report the facts. Even if a tester works hard to get the bugs he finds fixed, his efforts, alone, can't make an inherently poor-quality product better. Quality can't be tested in. Period.

NOTE

Some companies do believe that quality can be tested in. Rather than improve the process they use to create their software, they believe that adding more testers is the solution. They think that more testers finding more bugs will make their product better. Interestingly, these same people would never consider using more thermometers to lower someone's fever.

Ultimately, if you're working in a group named “Software Testing,” it will be your test manager's responsibility to make sure that everyone on the project team understands this definition of your role. It often becomes a point of contention when schedules aren't hit and bugs are missed, so it's one that should be made perfectly clear up front, preferably in the project's test plan.

Quality Assurance

Another name frequently given to the group that finds software bugs is “Software Quality Assurance (QA).” Chapter 3, “The Realities of Software Testing,” cited the following definition of a person in this role:

A Software Quality Assurance person's main responsibility is to examine and measure the current software development process and find ways to improve it with a goal of preventing bugs from ever occurring.

Now that you know a lot more about software testing, this definition probably sounds a lot more scary than when you first read it back in Chapter 3. A Software QA group has a much larger scope and responsibility than a software testing group—or at least they should according to their name.

The definition of assurance[2] is “a guarantee or pledge” or “a freedom from doubt,” so a QA group's role is to guarantee, without any doubt, that the product is of high quality. You can see why, if you're really a testing group, you don't want to assume this supposedly more “prestigious” title. Allow a bug, any bug, to be found by a customer and you've failed in your job.

You may be wondering, if software testing alone can't guarantee a product's quality, what a Software QA group would do to achieve it. The answer is having nearly full control over the project: instituting standards and methodologies, carefully and methodically monitoring and evaluating the software development process, feeding solutions back to the process problems they find, performing the testing (or overseeing it), and having the authority to decide when the product is ready to release. It may be an oversimplification to say that it's like having a project manager whose primary goal is “no bugs” rather than keeping the product on schedule or under budget, but it's a pretty good description.

You'll learn later in this chapter that moving from software testing to software quality assurance is a gradual process, a matter of achieving increasing levels of maturity. It's not a single-step transition where yesterday you were a tester and today you're a QAer.

Actually, some of the skills you've learned in this book can be considered software QA skills depending on where you draw the line on bug prevention and where the separation occurs between an internal failure and an external failure. If the goal of software QA is to prevent bugs, you could argue that performing static testing on the product spec, design documents, and code (Chapters 4, “Examining the Specification,” and 6, “Examining the Code”) is a type of software QA because you're preventing bugs from occurring. Bugs found this way never make it through to later be found by the testers testing the finished software.

Other Names for Software Testing Groups

Depending on where you work, your test group may use one of many other names to identify itself. Software Quality Control (SQC) is one that's frequently used. This name stems from the manufacturing industry where QC inspectors sample products taken off the manufacturing line, test them, and, if they fail, have the authority to shut down the line or the entire factory. Few, if any, software test groups have this authority—even ones that call themselves Software QC.
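
For contrast, here's roughly what that manufacturing-style rule looks like sketched in Python. The sample size and failure threshold are invented (real QC sampling plans are derived statistically); the sketch just shows the shape of the authority the name implies.

import random

# A toy manufacturing-style acceptance-sampling rule: inspect a
# random sample from the lot and stop the line if too many units
# fail. The sample size and threshold here are invented.
def line_may_continue(lot, passes_inspection, sample_size=20, max_failures=1):
    sample = random.sample(lot, min(sample_size, len(lot)))
    failures = sum(1 for unit in sample if not passes_inspection(unit))
    return failures <= max_failures

# A lot of 1,000 widgets, about 5% of them (hypothetically) defective.
lot = [{"defective": random.random() < 0.05} for _ in range(1_000)]
print(line_may_continue(lot, lambda widget: not widget["defective"]))

A software test group rarely holds the equivalent power to stop the line, no matter what its name says.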

Software Verification and Validation is also commonly used to describe a software test organization. This name is one that actually works pretty well. Although it's a bit wordy, it states exactly what the test group is responsible for and what they do. Look back to Chapter 3 for the definitions of verification and validation. It's even possible to have two groups, one for verification and one for validation.

Integration and Test, Build and Test, Configuration Management and Test, Test and Lab Management, and other such compound names are often a sign of a problem. Many times the software test group takes on roles (voluntarily or not) that are unrelated to testing. For example, it's not uncommon for a test group to own the job of configuration management or of building the product. The problem with this is twofold:

  • It takes away resources that should be used for testing the product.

  • The test group's goal is ultimately to break things, not to make things, and owning the software's build process creates a conflict of interest.

It's best to let the programmers or a separate team build the software. Testing should concentrate on finding bugs.

Test Management and Organizational Structures

Besides a test group's name and its assumed responsibilities, there's another attribute that greatly affects what it does and how it works with the project team. That attribute is where it fits in the company's overall management structure. A number of organizational structures are possible, each having its own positives and negatives. Some are claimed to be generally better than others, but what's better for one may not necessarily be better for another. If you work for any length of time in software testing, you'll be exposed to many of them. Here are a few common examples.

Figure 21.2 shows a structure often used by small (fewer than 10 or so people) project teams. In this structure, the test group reports to the Development Manager, the person managing the work of the programmers. Given what you've learned about software testing, this should raise a red flag: having the people who write the code and the people who find bugs in that code report to the same person has the potential for big problems.

Figure 21.2. The organizational structure for a small project often has the test team reporting to the development manager.

There's an inevitable conflict of interest. The Development Manager's goal is to have his team develop software, and testers reporting bugs just hinder that process. Testers doing their job well on one side make the programmers look bad on the other. If the manager gives more resources and funding to the testers, they'll probably find more bugs; but the more bugs they find, the more they'll crimp the manager's goal of making software.

Despite these negatives, this structure can work well if the development manager is very experienced and realizes that his goal isn't just to create software, but to create quality software. Such a manager would value the testers as equals to the programmers. This is also a very good organization for communication flow. There are minimal layers of management, and the testers and programmers can work together very efficiently.

Figure 21.3 shows another common organizational structure in which both the test group and the development group report to the manager of the project. In this arrangement, the test group often has its own lead or manager whose interest and attention are focused on the test team and its work. This independence is a great advantage when critical decisions are made regarding the software's quality. The test team's voice is equal to the voices of the programmers and other groups contributing to the product.

Figure 21.3. In an organization where the test team reports to the project manager, there's some independence of the testers from the programmers.

The downside, however, is that the project manager is making the final decision on quality. This may be fine, and in many industries and types of software, it's perfectly acceptable. In the development of high-risk or mission-critical systems, however, it's sometimes beneficial to have the voice of quality heard at a higher level. The organization shown in Figure 21.4 represents such a structure.

Figure 21.4. A quality assurance or test group that reports to executive management has the most independence, the most authority, and the most responsibility.

In this organization, the teams responsible for software quality report directly to senior management, independent of, and at an equal reporting level with, the individual projects. The level of authority is often at the quality assurance level, not just the testing level. The independence this group holds allows it to set standards and guidelines, measure the results, and adopt processes that span multiple projects. Information regarding poor quality (and good quality) goes straight to the top.

Of course, with this authority comes an equal measure of responsibility and restraint. The group's independence from the projects doesn't entitle it to set unreasonable, difficult-to-achieve quality goals that the projects and the users of the software don't demand. A corporate quality standard that works well for database software might not work well when applied to a computer game. To be effective, this independent quality organization must find ways to work with all the projects it deals with and temper its enthusiasm for quality with the practicality of releasing software.

Keep in mind that these three organizational structures are just simplified examples of the many types possible and that the positives and negatives discussed for each can vary widely. In software development and testing, one size doesn't necessarily fit all, and what works for one team may not work for another. There are, however, common metrics and guidelines that have been proven to work across different projects and teams for improving their quality levels. In the next two sections, you'll learn a little about them and how they're used.

Capability Maturity Model (CMM)

The Capability Maturity Model Integration[3] for Software (CMMI) is an industry-standard model for defining and measuring the maturity of a software company's development process and for providing direction on what the company can do to improve its software quality. It was developed by the software development community along with the Software Engineering Institute (SEI) at Carnegie Mellon University, under the direction of the U.S. Department of Defense.

What makes CMMI special is that it's generic and applies equally well to any size software company—from the largest software company in the world to the single-person consultant. Its five levels (see Figure 21.5) provide a simple means to assess a company's software development maturity and determine the key practices they could adopt to move up to the next level of maturity.

Figure 21.5. The Software Capability Maturity Model is used to assess a software company's maturity at software development.

As you read on and learn what each of the five levels entails, think about the following: If you take the entire universe of software companies today, most are at Maturity Level 1, many are at Maturity Level 2, fewer are at Maturity Level 3, a handful are at Maturity Level 4, and an elite few are at Maturity Level 5. Here are descriptions of the five CMMI Maturity Levels:

  • Level 1: Initial. The software development processes at this level are ad hoc and often chaotic. The project's success depends on heroes and luck. There are no general practices for planning, monitoring, or controlling the process. It's impossible to predict the time and cost to develop the software. The test process is just as ad hoc as the rest of the process.

  • Level 2: Repeatable. This maturity level is best described as project-level thinking. Basic project management processes are in place to track the cost, schedule, functionality, and quality of the project. Lessons learned from previous similar projects are applied. There's a sense of discipline. Basic software testing practices, such as test plans and test cases, are used.

  • Level 3: Defined. Organizational, not just project-specific, thinking comes into play at this level. Common management and engineering activities are standardized and documented. These standards are adapted and approved for use on different projects. The rules aren't thrown out when things get stressful. Test documents and plans are reviewed and approved before testing begins. The test group is independent from the developers. The test results are used to determine when the software is ready.

  • Level 4: Managed. This maturity level is under statistical control. Product quality is specified quantitatively beforehand (for example, “this product won't release until it has fewer than 0.5 defects per 1,000 lines of code”), and the software isn't released until that goal is met. Details of the development process and the software's quality are collected over the project's development, and adjustments are made to correct deviations and to keep the project on plan. (A sketch of such a quantitative release gate appears after this list.)

  • Level 5: Optimizing. This level is called Optimizing (not “optimized”) because it's continually improving from Level 4. New technologies and processes are attempted, the results are measured, and both incremental and revolutionary changes are instituted to achieve even better quality levels. Just when everyone thinks the best has been achieved, the crank is turned one more time, and the next level of improvement is obtained.
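
Here's the quantitative release gate promised above, as a minimal Python sketch. The threshold and the project numbers are invented; a real Level 4 organization would derive its goal from its own measured, historical data.

# A toy Level 4-style release gate. The threshold and project
# numbers are invented for illustration.
DEFECT_DENSITY_GOAL = 0.5  # open defects allowed per 1,000 lines of code

def defect_density(open_defects, lines_of_code):
    """Open defects per thousand lines of code (KLOC)."""
    return open_defects / (lines_of_code / 1_000)

def ready_to_release(open_defects, lines_of_code):
    return defect_density(open_defects, lines_of_code) < DEFECT_DENSITY_GOAL

# 120 open defects in 200,000 lines is 0.6 defects/KLOC: hold the release.
print(ready_to_release(open_defects=120, lines_of_code=200_000))  # False
# 80 open defects in 200,000 lines is 0.4 defects/KLOC: goal met.
print(ready_to_release(open_defects=80, lines_of_code=200_000))   # True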

Do any of these levels sound like the process used at a software development company you know? It's scary to think that a great deal of software is developed at Level 1—but it's often not surprising after you use the software. Would you want to cross a bridge, ride an elevator, or fly on a plane that was developed at Level 1? Probably not. Eventually—hopefully—consumers will demand higher quality software and you'll see companies start to move up in their software development maturity.

NOTE

It's important to realize that it's not a software tester's role to champion a company's move up in software development maturity. That needs to be done at a corporate level, instituted from the top down. When you begin a new testing job, you should assess where the company and your new team are among the different maturity levels. Knowing what level they operate at, or what level they're striving for, will help you set your expectations and give you a better understanding of what they expect from you.

For more information on the Capability Maturity Model, visit the Software Engineering Institute's website at www.sei.cmu.edu/cmmi.

ISO 9000

Another popular set of standards related to software quality is ISO 9000, from the International Organization for Standardization (ISO). ISO sets standards for everything from nuts and bolts to, in the case of ISO 9000, quality management and quality assurance.

You may have heard of ISO 9000 or noticed it in advertisements for a company's products or services. Often it's a little logo or note next to the company name. It's a big deal to become ISO 9000 certified, and a company that has achieved it wants to make that fact known to its customers—especially if its competitors aren't certified.

ISO 9000 is a family of standards on quality management and quality assurance that defines a basic set of good practices to help a company consistently deliver products (or services) that meet its customers' quality requirements. It doesn't matter whether the company is run out of a garage or is a multi-billion-dollar corporation, or whether it makes software or fishing lures or delivers pizza. Good management practices apply equally to all of them.

ISO 9000 works well for two reasons:

  • It targets the development process, not the product. It's concerned about the way an organization goes about its work, not the results of the work. It doesn't attempt to define the quality levels of the widgets coming off the assembly line or the software on the CD. As you've learned, quality is relative and subjective. A company's goal should be to create the level of quality that its customers want. Having a quality development process will help achieve that.

  • ISO 9000 dictates only what the process requirements are, not how they are to be achieved. For example, the standard says that a software team should plan and perform product design reviews (see Chapters 4 and 6), but it doesn't say how that requirement should be accomplished. Performing design reviews is a good exercise that a responsible design team should do (which is why it's in ISO 9000), but exactly how the design review is to be organized and run is up to the individual team creating the product. ISO 9000 tells you what to do but not how to do it.

NOTE

A company becoming ISO 9000 certified is an indication that it has achieved a specified level of quality control in its development process. It doesn't mean that its products have a specified level of quality—although it's probably a safe bet that its products are of better quality than those of a company that doesn't meet ISO 9000.

For this reason, customers, especially in the European Union and increasingly in the United States, are expecting their suppliers to be ISO 9000 certified. If two suppliers are competing for the same contract, the one with ISO 9000 certification will have the competitive edge.

The sections of the ISO 9000 standard that deal with software are ISO 9001 and ISO 9000-3. ISO 9001 is for businesses that design, develop, produce, install, and service products. ISO 9000-3 is for businesses that develop, supply, install, and maintain computer software.

It's impossible to detail all the ISO 9000 requirements for software in this chapter, but the following list will give you an idea of what types of criteria the standard contains. It will also, hopefully, make you feel a little better, knowing that there's an international initiative to help companies create a better software development process and to help them build better quality software.

Some of the requirements in ISO 9000-3 include

  • Develop detailed quality plans and procedures to control configuration management, product verification and validation (testing), nonconformance (bugs), and corrective actions (fixes).

  • Prepare and receive approval for a software development plan that includes a definition of the project, a list of the project's objectives, a project schedule, a project specification, a description of how the project is organized, a discussion of risks and assumptions, and strategies for controlling the project.

  • Communicate the specification in terms that make it easy for the customer to understand and to validate during testing.

  • Plan, develop, document, and perform software design review procedures.

  • Develop procedures that control software design changes made over the product's life cycle.

  • Develop and document software test plans.

  • Develop methods to test whether the software meets the customer's requirements.

  • Perform software validation and acceptance tests.

  • Maintain records of the test results.

  • Control how software bugs are investigated and resolved.

  • Prove that the product is ready before it's released.

  • Develop procedures to control the software's release process.

  • Identify and define what quality information should be collected.

  • Use statistical techniques to analyze the software development process (see the sketch after this list for one example).

  • Use statistical techniques to evaluate product quality.
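
As promised in the list above, here's a minimal sketch of one statistical technique: a simple control chart over weekly new-bug counts. The counts are invented, and mean plus or minus three standard deviations is just one common, simple choice of control limits.

from statistics import mean, stdev

# Control limits derived from a (hypothetical) stable baseline
# period of weekly new-bug counts.
baseline = [14, 11, 16, 13, 12, 15, 13, 12]
center = mean(baseline)   # 13.25
sigma = stdev(baseline)   # about 1.7
upper = center + 3 * sigma
lower = max(0.0, center - 3 * sigma)

def in_control(new_bugs_this_week):
    """Is this week's bug count within normal process variation?"""
    return lower <= new_bugs_this_week <= upper

print(f"limits: {lower:.1f} to {upper:.1f} new bugs per week")
print(in_control(15))  # True: normal variation, nothing to do
print(in_control(31))  # False: the process changed; find out why

An out-of-control week doesn't say what went wrong, only that something in the process changed; its value is in triggering an investigation.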

These requirements should all sound pretty fundamental and like common sense to you by now. You may even be wondering how a software company could create software at all without having these processes in place. It's amazing that it's possible, but it does explain why much of the software on the market is so full of bugs. Hopefully, over time, competition and customer demand will compel more companies in the software industry to adopt ISO 9000 as the means by which they do business.

If you're interested in learning more about the ISO 9000 standards for your own information or if your company is pursuing certification, check out the following websites:

  • International Organization for Standardization (ISO), www.iso.ch

  • American Society for Quality (ASQ), www.asq.org

  • American National Standards Institute (ANSI), www.ansi.org

Summary

One of Murphy's laws states that there's never enough time to do something right, but there's always enough time to do it over—sounds like a CMM Level 1 company, doesn't it? Forget about Murphy and think Philip Crosby. He was correct when he declared that quality really is free. It's just a matter of the software development team following a process, taking their time, being disciplined, and attempting to do it right the first time.

Of course, despite everyone's best efforts, mistakes will still be made and bugs will still occur. The goal of software quality assurance, though, is to make sure that they are truly mistakes and aren't caused by fundamental problems in the development process. Software testing will always be necessary, even in the best-run organizations, but if everything runs perfectly, you might be reduced to saying, “Nope, I didn't find any bugs today. Maybe tomorrow.”

You've almost completed this book and your tour of software testing. There's one more chapter to cover where you'll learn how to gain more experience in software testing and where to look for more information.

Quiz

These quiz questions are provided for your further understanding. See Appendix A, “Answers to Quiz Questions,” for the answers—but don't peek!

1: Why are there testing costs associated with the costs of conformance?

2: True or False: The test team is responsible for quality.

3: Why would being called a QA Engineer be a difficult title to live up to?

4: Why is it good for a test or quality assurance group to report independently to senior management?

5: If a company complied with the ISO 9000-3 standard for software, what CMM level do you think they would be in and why?



[1] Philip Crosby, Joseph Juran, and W. Edwards Deming are considered by many to be the “fathers of quality.” They've written numerous books on quality assurance and their practices are in use throughout the world. Although their writings aren't specifically about software, their concepts, often in-your-face common sense, are appropriate to all fields. Good reading.

[2] The American Heritage Dictionary of the English Language, Third Edition.

[3] CMM, Capability Maturity Model, CMMI, Capability Maturity Model Integration, and Carnegie Mellon are registered in the U.S. Patent and Trademark Office.
