Chapter 14
A Roadmap to Effective Test Automation

About This Chapter

Chapter 13, Testing with Databases, introduced a set of patterns specific to testing applications that have a database. These patterns built on the techniques described in Chapter 6, Test Automation Strategy; Chapter 9, Persistent Fixture Management; and Chapter 11, Using Test Doubles. This was a lot of material to become familiar with before we could test effectively with and without databases!

This raises an important point: We don't become experts in test automation overnight—these skills take time to develop. It also takes time to learn the various tools and patterns at our disposal. This chapter provides something of a roadmap for how to learn the patterns and acquire the skills. It introduces the concept of "test automation maturity," which is loosely based on the SEI's Capability Maturity Model (CMM).

Test Automation Difficulty

Some kinds of tests are harder to write than others. This difficulty arises partly because the techniques are more involved and partly because they are less well known and the tools to do this kind of test automation are less readily available. The following common kinds of tests are listed in approximate order of difficulty, from easiest to most difficult:

  1. Simple entity objects (Domain Model [PEAA])

    • Simple business classes with no dependencies

    • Complex business classes with dependencies

  2. Stateless service objects

    • Individual components via component tests

    • The entire business logic layer via Layer Tests (page 337)

  3. Stateful service objects

    • Customer tests via a Service Facade [CJ2EEP] using Subcutaneous Tests (see Layer Test)

    • Stateful components via component tests

  4. "Hard-to-test" code

    • User interface logic exposed via Humble Dialog (see Humble Object on page 695)

    • Database logic

    • Multi-threaded software

  5. Object-oriented legacy software (software built without any tests)
  6. Non-object-oriented legacy software

As we move down this list, the software becomes increasingly challenging to test. The irony is that many teams "get their feet wet" by trying to retrofit tests onto an existing application. This puts them in one of the last two categories in this list, which is precisely where the most experience is required. Unfortunately, many teams fail to test the legacy software successfully, which may then prejudice them against trying automated testing, with or without test-driven development. If you find yourself trying to learn test automation by retrofitting tests onto legacy software, I have two pieces of advice for you: First, hire someone who has done it before to help you through this process. Second, read Michael Feathers' excellent book [WEwLC]; he covers many techniques specifically applicable to retrofitting tests.

Roadmap to Highly Maintainable Automated Tests

Given that some kinds of tests are much harder to write than others, it makes sense to focus on learning to write the easier tests first before we move on to the more difficult kinds of tests. When teaching automated testing to developers, I introduce the techniques in the following sequence. This roadmap is based on Maslow's hierarchy of needs [HoN], which says that we strive to meet the higher-level needs only after we have satisfied the lower-level needs.

  1. Exercise the happy path code

    • Set up a simple pre-test state of the SUT

    • Exercise the SUT by calling the method being tested

  2. Verify direct outputs of the happy path

    • Call Assertion Methods (page 362) on the SUT's responses

    • Call Assertion Methods on the post-test state

  3. Verify alternative paths

    • Vary the SUT method arguments

    • Vary the pre-test state of the SUT

    • Control indirect inputs of the SUT via a Test Stub (page 529)

  4. Verify indirect output behavior

    • Use Mock Objects (page 544) or Test Spies (page 538) to intercept and verify outgoing method calls

  5. Optimize test execution and maintainability

    • Make the tests run faster

    • Make the tests easy to understand and maintain

    • Design the SUT for testability

    • Reduce the risk of missed bugs

This ordering of needs isn't meant to imply that this is the order in which we might think about implementing any specific test.1 Rather, it is likely to be the order in which a project team might reasonably expect to learn about the techniques of test automation.

Let's look at each of these points in more detail.

Exercise the Happy Path Code

To run the happy path through the SUT, we must automate one Simple Success Test (see Test Method on page 348) as a simple round-trip test through the SUT's API. To get this test to pass, we might simply hard-code some of the logic in the SUT, especially where it might call other components to retrieve information it needs to make decisions that would drive the test down the happy path. Before exercising the SUT, we need to set up the test fixture by initializing the SUT to the pre-test state. As long as the SUT executes without raising any errors, we consider the test as having passed; at this level of maturity we don't check the actual results against the expected results.
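For example, a first happy-path test might look like the following minimal JUnit sketch. It assumes a hypothetical invoicing domain; Customer, Invoice, and Product are illustrative classes, not definitions taken from this chapter.

import org.junit.Test;

public class InvoiceTest {
   @Test
   public void testAddLineItem() {
      // Set up a simple pre-test state of the SUT
      Customer customer = new Customer("Acme Corp");
      Invoice invoice = new Invoice(customer);
      Product product = new Product("Widget", 19.99);

      // Exercise the SUT; at this maturity level the test "passes"
      // as long as no exception is raised
      invoice.addItemQuantity(product, 5);
   }
}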

Verify Direct Outputs of the Happy Path

Once the happy path is executing successfully, we can add result verification logic to turn our test into a Self-Checking Test (see page 26). This involves adding calls to Assertion Methods to compare the expected results with what actually occurred. We can easily make this change for any objects or values returned to the test by the SUT (e.g., "return values," "out parameters"). We can also call other methods on the SUT or use public fields to access the post-test state of the SUT; we can then call Assertion Methods on these values as well.
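Building on the previous sketch, adding Assertion Method calls turns it into a Self-Checking Test. The getLineItems and getTotal accessors on the hypothetical Invoice class are assumed for illustration.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvoiceTest {
   @Test
   public void testAddLineItem_totalReflectsNewItem() {
      Customer customer = new Customer("Acme Corp");
      Invoice invoice = new Invoice(customer);
      Product product = new Product("Widget", 19.99);

      // Exercise the SUT
      invoice.addItemQuantity(product, 5);

      // Verify direct outputs and post-test state via Assertion Methods
      assertEquals(1, invoice.getLineItems().size());
      assertEquals(5 * 19.99, invoice.getTotal(), 0.001);
   }
}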

Verify Alternative Paths

At this point the happy path through the code is reasonably well tested. The alternative paths through the code are still Untested Code (see Production Bugs on page 268) so the next step is to write tests for these paths (whether we have already written the production code or we are striving to automate the tests that would drive us to implement them). The question to ask here is "What causes the alternative paths to be exercised?" The most common causes are as follows:

  • Different values passed in by the client as arguments
  • Different prior state of the SUT itself
  • Different results of invoking methods on components on which the SUT depends

The first case can be tested by varying the logic in our tests that calls the SUT methods we are exercising and passing in different values as arguments. The second case involves initializing the SUT with a different starting state. Neither of these cases requires any "rocket science." The third case, however, is where things get interesting.
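Before turning to the third case, the first two might be sketched as follows, still using the hypothetical invoicing classes; the zero-quantity argument and the closed-invoice state are illustrative alternative paths, not requirements taken from this chapter.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
import org.junit.Test;

public class InvoiceAlternativePathTest {
   @Test
   public void testAddLineItem_zeroQuantityAddsNothing() {
      // Same fixture, different argument value drives an alternative path
      Invoice invoice = new Invoice(new Customer("Acme Corp"));
      invoice.addItemQuantity(new Product("Widget", 19.99), 0);
      assertEquals(0, invoice.getLineItems().size());
   }

   @Test
   public void testAddLineItem_failsOnClosedInvoice() {
      // Different pre-test state of the SUT
      Invoice invoice = new Invoice(new Customer("Acme Corp"));
      invoice.close();
      try {
         invoice.addItemQuantity(new Product("Widget", 19.99), 1);
         fail("Expected an exception when adding to a closed invoice");
      } catch (IllegalStateException expected) {
         // the alternative path was exercised
      }
   }
}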

Controlling Indirect Inputs

Because the responses from other components are supposed to cause the SUT to exercise the alternative paths through the code, we need to get control over these indirect inputs. We can do so by using a Test Stub that returns the value that should drive the SUT into the desired code path. As part of fixture setup, we must force the SUT to use the stub instead of the real component. The Test Stub can be built in either of two ways: as a Hard-Coded Test Stub (see Test Stub), which contains hand-written code that returns the specific values, or as a Configurable Test Stub (see Test Stub), which is configured by the test to return the desired values. In both cases, the SUT must use the Test Stub instead of the real component.
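Here is a sketch of a Hard-Coded Test Stub, assuming a hypothetical InvoicePricer SUT that depends on a TaxRateService interface (both names are illustrative).

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvoicePricingTest {
   // Hard-Coded Test Stub: always returns the value that drives the desired path
   static class FixedTaxRateStub implements TaxRateService {
      public double getRateFor(String region) {
         return 0.05;   // the indirect input, now under test control
      }
   }

   @Test
   public void testTotalWithTax_usesRateFromService() {
      Invoice invoice = new Invoice(new Customer("Acme Corp"));
      invoice.addItemQuantity(new Product("Widget", 100.00), 1);

      // Fixture setup installs the stub in place of the real component
      InvoicePricer pricer = new InvoicePricer(new FixedTaxRateStub());

      assertEquals(105.00, pricer.totalWithTax(invoice), 0.001);
   }
}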

Many of these alternative paths result in "successful" outputs from the SUT; these tests are considered Simple Success Tests and use a style of Test Stub called a Responder (see Test Stub). Other paths are expected to raise errors or exceptions; they are considered Expected Exception Tests (see Test Method) and use a style of stub called a Saboteur (see Test Stub).
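A Saboteur for the same hypothetical TaxRateService might throw instead of returning a value, turning the test into an Expected Exception Test; ServiceUnavailableException and PricingException are illustrative exception types.

import static org.junit.Assert.fail;
import org.junit.Test;

public class InvoicePricingErrorTest {
   // Saboteur: a Test Stub that raises the error we want the SUT to handle
   static class BrokenTaxRateStub implements TaxRateService {
      public double getRateFor(String region) {
         throw new ServiceUnavailableException("tax service down");
      }
   }

   @Test
   public void testTotalWithTax_reportsServiceFailure() {
      InvoicePricer pricer = new InvoicePricer(new BrokenTaxRateStub());
      try {
         pricer.totalWithTax(new Invoice(new Customer("Acme Corp")));
         fail("Expected a PricingException when the tax service is unavailable");
      } catch (PricingException expected) {
         // the error-handling path was exercised
      }
   }
}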

Making Tests Repeatable and Robust

The act of replacing a real depended-on component (DOC) with a Test Stub has a very desirable side effect: It makes our tests both more robust and more repeatable.2 By using a Test Stub, we replace a possibly nondeterministic component with one that is completely deterministic and under test control. This is a good example of the Isolate the SUT principle (see page 43).
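The system clock is a classic example of such a nondeterministic dependency. The following sketch stubs it out, assuming the SUT accepts a hypothetical TimeProvider interface with a single method that returns the current date.

import static org.junit.Assert.assertTrue;
import java.time.LocalDate;
import org.junit.Test;

public class InvoiceDueDateTest {
   @Test
   public void testIsOverdue_withFixedClock() {
      // Replace the nondeterministic clock with a stub that returns a fixed date
      TimeProvider stubClock = () -> LocalDate.of(2024, 1, 15);

      Invoice invoice = new Invoice(new Customer("Acme Corp"));
      invoice.setDueDate(LocalDate.of(2024, 1, 1));

      // The outcome no longer depends on when the test happens to run
      assertTrue(invoice.isOverdue(stubClock));
   }
}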

Verify Indirect Output Behavior

Thus far we have focused on getting control of the indirect inputs of the SUT and verifying readily visible direct outputs by inspecting the post-test state of the SUT. This kind of result verification is known as State Verification (page 462). Sometimes, however, we cannot confirm that the SUT has behaved correctly simply by looking at the post-test state. That is, we may still have some Untested Requirements (see Production Bugs) that can only be verified by doing Behavior Verification (page 468).

We can build on what we already know how to do by using one of the close relatives of the Test Stub to intercept the outgoing method calls from our SUT. A Test Spy "remembers" how it was called so that the test can later retrieve the usage information and use Assertion Method calls to compare it to the expected usage. A Mock Object can be loaded with expectations during fixture setup, which it subsequently compares with the actual calls as they occur while the SUT is being exercised.
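A sketch of a hand-written Test Spy follows, assuming a hypothetical LedgerService dependency on which the Invoice makes an outgoing call when it is closed.

import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class InvoicePostingTest {
   // Test Spy: records the indirect outputs so the test can verify them afterwards
   static class LedgerSpy implements LedgerService {
      final List<String> postedEntries = new ArrayList<String>();
      public void post(String entry) {
         postedEntries.add(entry);
      }
   }

   @Test
   public void testClose_postsExactlyOneLedgerEntry() {
      LedgerSpy ledger = new LedgerSpy();
      Invoice invoice = new Invoice(new Customer("Acme Corp"), ledger);

      // Exercise the SUT
      invoice.close();

      // Behavior Verification: inspect the outgoing call after the fact
      assertEquals(1, ledger.postedEntries.size());
   }
}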

Optimize Test Execution and Maintenance

At this point we should have automated tests for all the paths through our code. We may, however, have less than optimal tests:

Make the Tests Run Faster

Slow Tests is often the first behavior smell we need to address. To make tests run faster, we can reuse the test fixture across many tests—for example, by using some form of Shared Fixture (page 317). Unfortunately, this tactic typically produces its own share of problems. Replacing a DOC with a Fake Object (page 551) that is functionally equivalent but executes much faster is almost always a better solution. Use of a Fake Object builds on the techniques we learned for verifying indirect inputs and outputs.
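A sketch of such a Fake Object, assuming a hypothetical InvoiceRepository interface normally backed by a database:

import java.util.HashMap;
import java.util.Map;

// Fake Object: a lightweight, in-memory replacement for a slow database-backed
// component; functionally equivalent for the tests' purposes but much faster
public class InMemoryInvoiceRepository implements InvoiceRepository {
   private final Map<Long, Invoice> invoices = new HashMap<Long, Invoice>();
   private long nextId = 1;

   public long save(Invoice invoice) {
      long id = nextId++;
      invoices.put(id, invoice);
      return id;
   }

   public Invoice findById(long id) {
      return invoices.get(id);
   }
}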

Make the Tests Easy to Understand and Maintain

We can make Obscure Tests easier to understand and remove a lot of Test Code Duplication by refactoring our Test Methods to call Test Utility Methods that contain any frequently used logic, rather than coding everything in-line. Creation Methods (page 415), Custom Assertions (page 474), Finder Methods (see Test Utility Method), and Parameterized Tests (page 607) are all examples of this approach.
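For example, a Creation Method and a Custom Assertion for the hypothetical invoicing domain might look like this (the helper names are illustrative):

import static org.junit.Assert.assertEquals;

public class InvoiceTestHelper {
   // Creation Method: hides the details of building a valid fixture object
   public static Invoice createAnonymousInvoice() {
      return new Invoice(new Customer("Anonymous Customer"));
   }

   // Custom Assertion: gives frequently repeated verification logic an
   // intent-revealing name
   public static void assertInvoiceContainsOnlyThisLineItem(Invoice invoice,
                                                            LineItem expected) {
      assertEquals(1, invoice.getLineItems().size());
      assertEquals(expected, invoice.getLineItems().get(0));
   }
}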

If our Testcase Classes (page 373) are getting too big to understand, we can reorganize these classes around fixtures or features. We can also better communicate our intent by using a systematic way of naming Testcase Classes and Test Methods that exposes the test conditions we are verifying in them.

Reduce the Risk of Missed Bugs

If we are having problems with Buggy Tests or Production Bugs, we can reduce the risk of false negatives (tests that pass when they shouldn't) by encapsulating complex test logic. When doing so, we should use intent-revealing names for our Test Utility Methods. We should verify the behavior of nontrivial Test Utility Methods using Test Utility Tests (see Test Utility Method).
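A sketch of a Test Utility Test for the Custom Assertion shown earlier confirms that the assertion fails when it should (LineItem remains an illustrative class):

import static org.junit.Assert.fail;
import org.junit.Test;

public class InvoiceTestHelperTest {
   @Test
   public void testCustomAssertion_failsWhenInvoiceIsEmpty() {
      Invoice invoice = InvoiceTestHelper.createAnonymousInvoice();
      LineItem unexpected = new LineItem(new Product("Widget", 19.99), 1);
      try {
         InvoiceTestHelper.assertInvoiceContainsOnlyThisLineItem(invoice, unexpected);
         fail("The custom assertion should fail for an invoice with no line items");
      } catch (AssertionError expected) {
         // the Test Utility Method correctly detects the mismatch
      }
   }
}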

What's Next?

This chapter concludes Part I, The Narratives. Chapters 1–14 have provided an overview of the goals, principles, philosophies, patterns, smells, and coding idioms related to writing effective automated tests. Part II, The Test Smells, and Part III, The Patterns, contain detailed descriptions of each of the smells and patterns introduced in these narrative chapters, complete with code samples.
