Chapter 18. Writing and Tracking Test Cases

IN THIS CHAPTER

  • The Goals of Test Case Planning

  • Test Case Planning Overview

  • Test Case Organization and Tracking

In Chapter 17, “Planning Your Test Effort,” you learned about the test planning process and the creation of a project test plan. The details and information that the test plan communicates are necessary for the project to succeed, but they are a bit abstract and high level for an individual tester's day-to-day testing activities.

The next step down in the test planning process, writing and tracking test cases, is one that more directly influences your typical tasks as a software tester. Initially you may be involved only in running test cases that someone else has written, but you'll very soon be writing them for yourself and for other testers to use. This chapter will teach you how to effectively develop and manage those test cases to make your testing go as efficiently as possible.

Highlights of this chapter include

  • Why writing and tracking test cases is important

  • What a test design specification is

  • What a test case specification is

  • How test procedures should be written

  • How test cases should be organized

The Goals of Test Case Planning

The early chapters of this book discussed the different software development models and the various testing techniques that can be used, based on those models, to perform effective testing. In a big-bang or code-and-fix model, the testers are at the mercy of the project, often having to guess what testing to perform and whether what they find are indeed bugs. In the more disciplined development models, testing becomes a bit easier because there's formal documentation such as product specs and design specs. The software creation—the design, architecture, and programming—becomes a true process, not just a chaotic race to get a product out the door. Testing in that environment is much more efficient and predictable.

There's the old saying, “What's good for the goose is good for the gander,” meaning what's beneficial to one person or group is also beneficial to another. Hopefully, from what you've learned so far, you would think it's heresy for a programmer to take the product spec and immediately start coding without developing a more detailed plan and distributing it for review. A tester, then, taking the test plan and instantly sitting down to think up test cases and begin testing should seem just as wrong. If software testers expect the project managers and the programmers to be more disciplined, instill some methods, and follow some rules to make the development process run more smoothly, they should also expect to do the same.

Carefully and methodically planning test cases is a step in that direction. Doing so is very important for four reasons:

  • Organization. Even on small software projects it's possible to have many thousands of test cases. The cases may have been created by several testers over the course of several months or even years. Proper planning will organize them so that all the testers and other project team members can review and use them effectively.

  • Repeatability. As you've learned, it's necessary over the course of a project to run the same tests several times to look for new bugs and to make sure that old ones have been fixed. Without proper planning, it would be impossible to know what test cases were last run and exactly how they were run so that you could repeat the exact tests.

  • Tracking. Similarly, you need to answer important questions over the course of a project. How many test cases did you plan to run? How many did you run on the last software release? How many passed and how many failed? Were any test cases skipped? And so on. If no planning went into the test cases, it would be impossible to answer these questions.

  • Proof of testing (or not testing). In a few high-risk industries, the software test team must prove that it did indeed run the tests that it planned to run. It could actually be illegal, and dangerous, to release software in which a few test cases were skipped. Proper test case planning and tracking provides a means for proving what was tested.

NOTE

Don't confuse test case planning with the identification of test cases that you learned in Part II, “Testing Fundamentals.” Those chapters taught you how to test and how to select test cases, similar to teaching a programmer how to program in a specific language. Test case planning is the next step up and is similar to a programmer learning how to perform high-level design and properly document his work.

Test Case Planning Overview

So where exactly does test case planning fit into the grand scheme of testing? Figure 18.1 shows the relationships among the different types of test plans.

Figure 18.1. The different levels of test documents all interact, and they vary in whether their importance lies in the document itself or in the process of creating it.

You're already familiar with the top, or project-level, test plan and know that the process of creating it is more important than the resulting document. The next three levels, the test design specification, the test case specification, and the test procedure specification, are described in detail in the following sections.

As you can see in Figure 18.1, moving further away from the top-level test plan puts less emphasis on the process of creation and more on the resulting written document. The reason is that these plans become useful on a daily, sometimes hourly, basis by the testers performing the testing. As you'll learn, at the lowest level they become step-by-step instructions for executing a test, making it key that they're clear, concise, and organized—how they got that way isn't nearly as important.

The information presented in this chapter is adapted from the IEEE Std 829-1998 Standard for Software Test Documentation (available from standards.ieee.org). Many testing teams have adopted this standard, intentionally or not, as their test planning documentation because it represents a logical and commonsense method for test planning. The important thing to realize about this standard is that unless you're bound to follow it to the letter because of the type of software you're testing or by your corporate or industry policy, you should use it as a guideline rather than a mandate. The information it contains and the approaches it recommends are as valid today as they were when the standard was first written in 1983. But what used to work best as a written document is often better and more efficiently presented today as a spreadsheet or a database. You'll see an example of this later in the chapter.

The bottom line is that you and your test team should create test plans that cover the information outlined in IEEE 829. If paper printouts work best (which would be hard to believe), by all means use them. If, however, you think a central database is more efficient and your team has the time and budget to develop or buy one, you should go with that approach. Ultimately it doesn't matter. What does matter is that when you've completed your work, you've met the four goals of test case planning: organization, repeatability, tracking, and proof.

TIP

There are many good templates available on the Web for test plans that are based on IEEE 829. Just do a search for “Test Plan Template” to find them.

Test Design

The overall project test plan is written at a very high level. It breaks out the software into specific features and testable items and assigns them to individual testers, but it doesn't specify exactly how those features will be tested. There may be a general mention of using automation or black-box or white-box testing, but the test plan doesn't get into the details of exactly where and how they will be used. This next level of detail that defines the testing approach for individual software features is the test design specification.

IEEE 829 states that the test design specification “refines the test approach [defined in the test plan] and identifies the features to be covered by the design and its associated tests. It also identifies the test cases and test procedures, if any, required to accomplish the testing and specifies the feature pass/fail criteria.”

The purpose of the test design spec is to organize and describe the testing that needs to be performed on a specific feature. It doesn't, however, give the detailed cases or the steps to execute to perform the testing. The following topics, adapted from the IEEE 829 standard, address this purpose and should be part of the test design specs that you create:

  • Identifiers. A unique identifier that can be used to reference and locate the test design spec. The spec should also reference the overall test plan and contain pointers to any other plans or specs that it references.

  • Features to be tested. A description of the software feature covered by the test design spec—for example, “the addition function of Calculator,” “font size selection and display in WordPad,” and “video card configuration testing of QuickTime.”

    This section should also identify features that may be indirectly tested as a side effect of testing the primary feature. For example, “Although not the target of this plan, the UI of the file open dialog box will be indirectly tested in the process of testing the load and save functionality.”

    It should also list features that won't be tested, ones that may be misconstrued as being covered by this plan. For example, “Because testing Calculator's addition function will be performed with automation by sending keystrokes to the software, there will be no indirect testing of the onscreen UI. The UI testing is addressed in a separate test design plan—CalcUI12345.”

  • Approach. A description of the general approach that will be used to test the features. It should expand on the approach, if any, listed in the test plan, describe the technique to be used, and explain how the results will be verified.

    For example, “A testing tool will be developed to sequentially load and save pre-built data files of various sizes. The number of data files, the sizes, and the data they contain will be determined through black-box techniques and supplemented with white-box examples from the programmer. A pass or fail will be determined by comparing the saved file bit-for-bit against the original using a file compare tool.”

  • Test case identification. A high-level description of, and references to, the specific test cases that will be used to check the feature. It should list the selected equivalence partitions and provide references to the test cases and test procedures used to run them. For example:

    Check the highest possible value (Test Case ID# 15326)

    Check the lowest possible value (Test Case ID# 15327)

    Check several interim powers of 2 (Test Case ID# 15328)

    It's important that the actual test case values aren't defined in this section. For someone reviewing the test design spec for proper test coverage, a description of the equivalence partitions is much more useful than the specific values themselves. (A sketch of how such identified cases might map to automated tests appears after this list.)

  • Pass/fail criteria. Describes exactly what constitutes a pass and a fail of the tested feature. What is acceptable and what is not? This may be very simple and clear—a pass is when all the test cases are run without finding a bug. It can also be fuzzy—a failure is when 10 percent or more of the test cases fail. There should be no doubt, though, what constitutes a pass or a fail of the feature.
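
To make the test case identification example concrete, here is a minimal sketch of what those three identified cases might look like as automated tests. Everything in it is an assumption for illustration: the stand-in add() function, the 32-bit signed integer limits chosen as the partition boundaries, and the reuse of the spec's ID numbers as pytest test IDs.

```python
# A minimal sketch of the three identified cases as automated tests.
# Assumptions for illustration: a stand-in add() function, 32-bit signed
# integer limits as the partition boundaries, and the spec's ID numbers
# reused as pytest test IDs.
import pytest

INT_MAX = 2**31 - 1   # assumed upper boundary of the equivalence partition
INT_MIN = -(2**31)    # assumed lower boundary


def add(a, b):
    # Stand-in for the real addition function under test.
    return a + b


@pytest.mark.parametrize(
    ("a", "b", "expected"),
    [
        pytest.param(INT_MAX - 1, 1, INT_MAX, id="TC-15326-highest-value"),
        pytest.param(INT_MIN + 1, -1, INT_MIN, id="TC-15327-lowest-value"),
        pytest.param(2**8, 2**8, 2**9, id="TC-15328-interim-power-of-2"),
    ],
)
def test_addition_boundaries(a, b, expected):
    assert add(a, b) == expected
```

Notice that the descriptive IDs let a reviewer see the equivalence partitions at a glance in the test report, which is exactly the kind of coverage check the test design spec is meant to support.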

Test Cases

Chapters 4 through 7 described the fundamentals of software testing—dissecting a specification, code, and software to derive the minimal amount of test cases that would effectively test the software. What wasn't discussed in those chapters is how to record and document the cases you create. If you've already started doing some software testing, you've likely experimented with different ideas and formats. This section on documenting test cases will give you a few more options to consider.

IEEE 829 states that the test case specification “documents the actual values used for input along with the anticipated outputs. A test case also identifies any constraints on the test procedure resulting from use of that specific test case.”

Essentially, the details of a test case should explain exactly what values or conditions will be sent to the software and what result is expected. It can be referenced by one or more test design specs and may reference more than one test procedure. The IEEE 829 standard also lists some other important information that should be included (a sketch of these fields as a simple data structure follows the list):

  • Identifiers. A unique identifier is referenced by the test design specs and the test procedure specs.

  • Test item. This describes the detailed feature, code module, and so on that's being tested. It should be more specific than the features listed in the test design spec. If the test design spec said “the addition function of Calculator,” the test case spec would say “upper limit overflow handling of addition calculations.” It should also provide references to the product specifications or other design docs on which the test case is based.

  • Input specification. This specification lists all the inputs or conditions given to the software to execute the test case. If you're testing Calculator, this may be as simple as 1+1. If you're testing cellular telephone switching software, there could be hundreds or thousands of input conditions. If you're testing a file-based product, it would be the name of the file and a description of its contents.

  • Output specification. This describes the result you expect from executing the test case. Did 1+1 equal 2? Were the thousands of output variables set correctly in the cell software? Did all the contents of the file load as expected?

  • Environmental needs. Environmental needs are the hardware, software, test tools, facilities, staff, and so on that are necessary to run the test case.

  • Special procedural requirements. This section describes anything unusual that must be done to perform the test. Testing WordPad probably doesn't need anything special, but testing nuclear power plant software might.

  • Intercase dependencies. Chapter 1, “Software Testing Background,” included a description of a bug that caused NASA's Mars Polar Lander to crash on Mars. It's a perfect example of an undocumented intercase dependency. If a test case depends on another test case or might be affected by another, that information should go here.
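
Taken together, these fields map naturally onto a simple record. The following is a rough sketch of one test case spec as a Python dataclass; the field names are a direct translation of the list above, but the structure itself, and the sample overflow case, are illustrative assumptions rather than anything IEEE 829 prescribes.

```python
# A rough sketch of an IEEE 829-style test case specification as a record.
# The field names mirror the list above; the structure and sample values
# are illustrative assumptions, not anything the standard prescribes.
from dataclasses import dataclass, field


@dataclass
class TestCaseSpec:
    identifier: str                # unique ID, referenced by design and procedure specs
    test_item: str                 # detailed feature or module under test
    input_spec: str                # inputs or conditions given to the software
    output_spec: str               # expected result of executing the case
    environmental_needs: list[str] = field(default_factory=list)
    special_procedures: str = ""   # anything unusual required to run the test
    depends_on: list[str] = field(default_factory=list)  # intercase dependencies


overflow_case = TestCaseSpec(
    identifier="TC-15326",
    test_item="Upper limit overflow handling of addition calculations",
    input_spec="Add two values whose sum exceeds the highest representable number",
    output_spec="An overflow is reported rather than an incorrect result",
    environmental_needs=["Windows PC", "Calculator build under test"],
)
```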

Are you panicked yet? If you follow this suggested level of documentation to the letter, you could be writing at least a page of descriptive text for each test case you identify! Thousands of test cases could take thousands of pages of documentation. The project could be outdated by the time you finish writing.

This is another reason why you should take the IEEE 829 standard as a guideline and not follow it to the letter—unless you have to. Many government projects and certain industries are required to document their test cases to this level, but in most other instances you can take some shortcuts.

Taking a shortcut doesn't mean dismissing or neglecting important information—it means figuring out a way to condense the information into a more efficient means of communicating it. For example, there's no reason that you're limited to presenting test cases in written paragraph form. Figure 18.2 shows an example of a printer compatibility table.

Figure 18.2. Test cases can be presented in the form of a matrix or table.

Each line of the matrix is a specific test case and has its own identifier. All the other information that goes with a test case—test item, input spec, output spec, environmental needs, special requirements, and dependencies—are most likely common to all these cases and could be written once and attached to the table. Someone reviewing your test cases could quickly read that information and then review the table to check its coverage.
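
A matrix like this is also easy to generate and maintain programmatically. The sketch below expands a small compatibility grid into one identified test case per row; the printer names and print modes are invented for illustration and are not the contents of Figure 18.2.

```python
# A sketch of generating matrix-style test cases: one identified case per
# combination. The printers and print modes are invented examples and are
# not the contents of Figure 18.2.
import csv
import itertools
import sys

printers = ["Canon BJC-7000", "HP LaserJet IV"]   # hypothetical rows
modes = ["draft", "normal", "best"]               # hypothetical columns

writer = csv.writer(sys.stdout)
writer.writerow(["case_id", "printer", "mode", "result"])
for n, (printer, mode) in enumerate(itertools.product(printers, modes), start=1):
    # Each row is one test case with its own identifier; 'result' is
    # filled in when the case is actually run.
    writer.writerow([f"PRN-{n:04d}", printer, mode, ""])
```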

Other options for presenting test cases are simple lists, outlines, or even graphical diagrams such as state tables or data flow diagrams. Remember, you're trying to communicate your test cases to others and should use whichever method is most effective. Be creative, but stay true to the purpose of documenting your test cases.

Test Procedures

After you document the test designs and test cases, what remains are the procedures that need to be followed to execute the test cases. IEEE 829 states that the test procedure specification “identifies all the steps required to operate the system and exercise the specified test cases in order to implement the associated test design.”

The test procedure or test script spec defines the step-by-step details of exactly how to perform the test cases. Here's the information that needs to be defined (a skeleton sketch in code follows the list):

  • Identifier. A unique identifier that ties the test procedure to the associated test cases and test design.

  • Purpose. The purpose of the procedure and reference to the test cases that it will execute.

  • Special requirements. Other procedures, special testing skills, or special equipment needed to run the procedure.

  • Procedure steps. Detailed description of how the tests are to be run:

    • Log. Tells how and by what method the results and observations will be recorded.

    • Setup. Explains how to prepare for the test.

    • Start. Explains the steps used to start the test.

    • Procedure. Describes the steps used to run the tests.

    • Measure. Describes how the results are to be determined—for example, with a stopwatch or visual determination.

    • Shut down. Explains the steps for suspending the test for unexpected reasons.

    • Restart. Tells the tester how to pick up the test at a certain point if there's a failure or after shutting down.

    • Stop. Describes the steps for an orderly halt to the test.

    • Wrap up. Explains how to restore the environment to its pre-test condition.

    • Contingencies. Explains what to do if things don't go as planned.
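
To see how these parts fit together, here is a skeleton sketch of a procedure with one hook per step, plus a small driver showing the intended order and the fallback to the shutdown and contingency steps. All of the names, IDs, and log messages are assumptions for illustration; IEEE 829 prescribes the information, not this structure.

```python
# A skeleton sketch of an IEEE 829-style test procedure: one method per
# procedure step. All names, IDs, and log messages here are illustrative
# assumptions; the standard prescribes the information, not this structure.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("PROC-0001")  # Log: how results and observations are recorded


class CalculatorAdditionProcedure:
    """Hypothetical procedure executing test cases TC-15326 through TC-15328."""

    def setup(self):
        log.info("Setup: prepare the test environment")

    def start(self):
        log.info("Start: launch the application under test")

    def procedure(self):
        log.info("Procedure: run the test cases")

    def measure(self):
        log.info("Measure: record pass/fail for each case")

    def shutdown(self):
        log.info("Shut down: suspend the test for unexpected reasons")

    def restart(self):
        log.info("Restart: resume from the last completed step")

    def stop(self):
        log.info("Stop: halt the test in an orderly way")

    def wrap_up(self):
        log.info("Wrap up: restore the environment to its pre-test condition")

    def contingencies(self):
        log.info("Contingencies: handle anything that didn't go as planned")


def execute(proc):
    # Drive the steps in their intended order; on an unexpected error, fall
    # back to the shutdown and contingency steps before wrapping up.
    try:
        proc.setup()
        proc.start()
        proc.procedure()
        proc.measure()
        proc.stop()
    except Exception:
        proc.shutdown()
        proc.contingencies()
    finally:
        proc.wrap_up()


execute(CalculatorAdditionProcedure())
```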

It's not sufficient for a test procedure to just say, “Try all the following test cases and report back on what you see….” That would be simple and easy but wouldn't tell a new tester anything about how to perform the testing. It wouldn't be repeatable and there'd be no way to prove what steps were executed. Using a detailed procedure makes it clear exactly what will be tested and how. Figure 18.3 shows an excerpt from a fictional example of a test procedure for Windows Calculator.

Figure 18.3. A fictional example of a test procedure shows how much detail can be involved.

Detail Versus Reality

An old saying, “Do everything in moderation,” applies perfectly well to test case planning. Remember the four goals: organization, repeatability, tracking, and proof. As a software tester developing test cases, you need to work toward these goals—but their level is determined by your industry, your company, your project, and your team. It's unlikely that you'll need to document your test cases down to the greatest level of detail and, hopefully, you won't be working on an ad hoc seat-of-your-pants project where you don't need to document anything at all. Odds are, your work will lie somewhere in between.

The trick is finding the right level of moderation. Consider the test procedure shown in Figure 18.3 that requires Windows 98 to be installed on a PC to run the tests. The procedure states in its setup section that Windows 98 is required—but it doesn't state what specific version of Windows 98. What happens with Windows 98 SE or with the various service pack updates? Does the test procedure need to be updated to reflect the change? To avoid this problem, the version could be omitted and replaced with “latest available,” but then what happens if a new release comes out during the product cycle? Should the tester switch OS releases in the middle of the project?

Another issue is that the procedure tells the tester to simply install a “clean copy” of Win98. What does clean copy mean? The procedure lists a couple of tools, WipeDisk and Clone, to be used in the setup process and refers the tester to a document that explains how to use them. Should the procedure steps be more detailed and explain exactly where to obtain this other document and these tools? If you've ever installed an operating system, you know it's a complex process that requires the installer to answer many questions and decide on many options. Should this procedure or a related procedure go into that level of detail? If it doesn't, how can it be known what configuration the tests were run on? If it does, and the installation process changes, there could be hundreds of test procedures to update. What a mess.

Unfortunately, there is no single, right answer. Highly detailed test case specs reduce ambiguity, make tests perfectly repeatable, and allow inexperienced testers to execute tests exactly as they were intended. On the other hand, writing test case specs to this level takes considerably more time and effort, makes updates difficult, and, because of all the detail, can bog down the test effort, causing it to take much longer to run.

When you start writing test cases, your best bet is to adopt the standards of the project you're working on. If you're testing a new medical device, your procedures will most likely need to be much more detailed than if you're testing a video game. If you're involved in setting up or recommending how the test design, test cases, and test procedures will be written for a new project, review the formats defined by the IEEE 829 standard, try some examples, and see what works best for you, your team, and your project.

Test Case Organization and Tracking

One consideration that you should take into account when creating the test case documentation is how the information will be organized and tracked. Think about the questions that a tester or the test team should be able to answer:

  • Which test cases do you plan to run?

  • How many test cases do you plan to run? How long will it take to run them?

  • Can you pick and choose test suites (groups of related test cases) to run on particular features or areas of the software?

  • When you run the cases, will you be able to record which ones pass and which ones fail?

  • Of the ones that failed, which ones also failed the last time you ran them?

  • What percentage of the cases passed the last time you ran them?

These are examples of important questions that might be asked over the course of a typical project. Chapter 20, “Measuring Your Success,” will discuss data collection and statistics in more detail, but for now, consider that some sort of process needs to be in place that allows you to manage your test cases and track the results of running them. There are essentially four possible systems:

  • In your head. Don't even consider this one, even for the simplest projects, unless you're testing software for your own personal use and have no reason to track your testing. You just can't do it.

  • Paper/documents. It's possible to manage the test cases for very small projects on paper. Tables and charts of checklists have been used effectively. They're obviously a weak method for organizing and searching the data, but they do offer one very important positive: a written checklist that includes a tester's initials or signature denoting that tests were run is excellent proof in a court of law that testing was performed.

  • Spreadsheet. A popular and very workable method of tracking test cases is by using a spreadsheet. Figure 18.4 shows an example of this. By keeping all the details of the test cases in one place, a spreadsheet can provide an at-a-glance view of your testing status. They're easy to use, relatively easy to set up, and provide good tracking and proof of testing.

    Figure 18.4. A spreadsheet can be used to effectively track and manage test suites and test cases.

  • Custom database. The ideal method for tracking test cases is to use a Test Case Management Tool, a database programmed specifically to handle test cases. Many commercially available applications are set up to perform just this specific task. Visit some of the web links listed in Chapter 22, “Your Career as a Software Tester,” for more information and recommendations from other testers. If you're interested in creating your own tracking system, database software such as FileMaker Pro, Microsoft Access, and many others provide almost drag-and-drop database creation that would let you build a database that mapped to the IEEE 829 standard in just a few hours. You could then set up reports and queries that would allow you to answer just about any question regarding the test cases.
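
As a minimal sketch of the custom database approach, the following code uses Python's built-in sqlite3 module to store a few test cases and answer two of the tracking questions posed earlier in this section. The schema and sample data are assumptions for illustration; a real test case management tool would carry far more of the IEEE 829 fields.

```python
# A minimal sketch of a test case tracking database using the standard
# library's sqlite3 module. The schema and sample data are illustrative;
# a real tool would carry far more of the IEEE 829 fields.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE test_cases (
        case_id  TEXT PRIMARY KEY,
        suite    TEXT,    -- group of related cases
        last_run TEXT,    -- ISO date of last execution, NULL if never run
        result   TEXT     -- 'pass', 'fail', or NULL if never run
    )
""")
con.executemany(
    "INSERT INTO test_cases VALUES (?, ?, ?, ?)",
    [
        ("TC-15326", "addition", "2005-06-01", "pass"),
        ("TC-15327", "addition", "2005-06-01", "fail"),
        ("TC-15328", "addition", None, None),   # planned but not yet run
    ],
)

# How many cases passed and failed on the last run?
for result, count in con.execute(
    "SELECT result, COUNT(*) FROM test_cases GROUP BY result"
):
    print(result, count)

# Which planned cases were skipped?
for (case_id,) in con.execute(
    "SELECT case_id FROM test_cases WHERE last_run IS NULL"
):
    print("never run:", case_id)
```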

The important thing to remember is that the number of test cases can easily be in the thousands and without a means to manage them, you and the other testers could quickly be lost in a sea of documentation. You need to know, at a glance, the answer to fundamental questions such as, “What will I be testing tomorrow, and how many test cases will I need to run?”

Summary

It's time again to remind you of the four reasons for carefully planning your test cases: organization, repeatability, tracking, and proof of testing. These can't be stressed enough because it's very easy to become lazy and neglect a very important part of a tester's job—to document exactly what you do.

You wouldn't want to drive a car that was designed and tested by an engineering team that scribbled their work on the back of a cocktail napkin, nor would you want to live next to a nuclear power plant whose control software was tested by a team of ad hoc testers. You would want the engineers who built and tested those systems to use good engineering practices, to document their work, and to make sure that they did what they originally planned.

As a new tester, you may not have control over what level of planning and documentation your project is using, but you should work to make your job as efficient as possible. Find out what's necessary and what's not, investigate ways to use technology to improve the process, but never cut corners. That's the difference between a professional and a hack.

This chapter and Chapter 17 dealt with planning and documenting what you intend to test. The next two chapters will cover how to document the results of your testing and how to tell the world that you found a bug.

Quiz

These quiz questions are provided for your further understanding. See Appendix A, “Answers to Quiz Questions,” for the answers—but don't peek!

1: What are the four reasons for test case planning?

2: What is ad hoc testing?

3: What's the purpose of a test design specification?

4: What is a test case specification?

5: Other than a traditional document, what means can you use to present your test cases?

6: What's the purpose of a test procedure specification?

7: At what level of detail should test procedures be written?
