Mechanics for Success

The prior section, Mind-Sets for Successful Adoption of TDD, emphasizes a philosophy for success with TDD. This section discusses various specific techniques that will help keep you on track.

What’s the Next Test?

As you embark on learning TDD, one of the biggest questions always on your mind will be, “What’s the next test I should write?” The examples in this book should suggest a few answers to this question.

One answer is to write the test that results in the simplest possible increment of production code. But just what does that mean?

Uncle Bob has devised a scheme to categorize each increment as a transformation form (see The Transformation Priority Premise). All transformations are prioritized from simplest (highest priority) to most complex (lowest priority). Your job is to choose the highest-priority transformation available and write the test that generates it. Incrementing in transformation priority order should produce an ideal ordering of tests. That's the premise: the Transformation Priority Premise (TPP).

If the TPP sounds complex, that's because it is. It's a theory. So far, it's been demonstrated to provide value for many algorithm-oriented solutions.
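To get a feel for the idea, consider a hypothetical (non-SQL) example: the first test can pass with a hard-coded constant; the second forces the constant to become an expression, one of the simplest transformations.

#include "gmock/gmock.h"
using namespace testing;

// Hypothetical illustration: after only the first test, the simplest
// implementation is "return 2;". The second test forces the next
// transformation in priority order: constant -> expression.
int add(int a, int b) {
   return a + b;
}

TEST(Addition, AnswersOnePlusOne) {
   ASSERT_THAT(add(1, 1), Eq(2));   // passable with: return 2;
}

TEST(Addition, AnswersTwoPlusThree) {
   ASSERT_THAT(add(2, 3), Eq(5));   // forces the real computation
}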

Otherwise, you can use the following list of questions to help decide:

  • What’s the next most logically meaningful behavior?

  • What’s the smallest piece of that meaningful behavior you can verify?

  • Can you write a test that demonstrates that the current behavior is insufficient?

Let’s work through an example and test-drive a class named SQL whose job is to generate SQL statements (select, insert, delete, and so on) given database metadata about a table.

We seek meaningful behavior with each new test. That means we don’t directly test-drive getters, setters, or constructors. (They’ll get coded as needed when driving in useful behaviors.) Generating a SQL statement that operates on a table seems useful and small enough. We can choose from drop table or truncate table.

Test name: GeneratesDropUsingTableName

Implementation: return "drop " + tableName_

That's trivial, representing a couple of minutes' coding. We can quickly add support for truncate.

SQL generation so far seems to involve appending the table name to a command string, which suggests a refactored implementation that simplifies creating drop and truncate statements. (The variables Drop and Truncate represent constant strings, each with a trailing space, in the example.)

 
std::string dropStatement() const {
   return createCommand(Drop);
}

std::string truncateStatement() const {
   return createCommand(Truncate);
}

std::string createCommand(const std::string& name) const {
   return name + tableName_;
}

We recognize that our behavior is insufficient. We’ll have a problem if client code passes in an empty table name.

Sometimes it’s worth considering exceptional cases early on, before you get too deep in all the happy path cases. Other times, you can save them until later. You can often code an exceptional test in a short time, needing only a small bit of minimally invasive code to make it pass.

Test name: ConstructionThrowsWhenTableNameEmpty

Implementation: if (tableName_.empty()) throw ...
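Fleshed out, the test and its minimally invasive fix might look like this sketch (the specific exception type is an assumption; the test as written accepts any):

#include <stdexcept>

TEST(ASQLGenerator, ConstructionThrowsWhenTableNameEmpty) {
   ASSERT_ANY_THROW(SQL sql(""));
}

// A guard clause in the constructor keeps the change minimally invasive.
SQL::SQL(const std::string& tableName) : tableName_(tableName) {
   if (tableName_.empty())
      throw std::invalid_argument("table name must not be empty");
}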

Next we tackle the select statement as a meaningful increment. The simplest case is to support select *.

Test name: GeneratesSelectStar

Implementation: return createCommand(SelectStarFrom)

It's important to support listing columns explicitly, since most developers consider that better practice than select *.

Test name: GeneratesSelectWithColumnList

Implementation: return Select + columnList() + From + tableName_

Now we’re hitting a tiny bit more complexity. It might take a few minutes to implement columnList, but getting the test to pass is still not an extensive effort.
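One plausible implementation of columnList, assuming the column metadata is held in a std::vector<std::string> member named columns_ (an assumption of this sketch):

std::string SQL::columnList() const {
   std::string list;
   for (const auto& column: columns_) {
      if (!list.empty()) list += ", ";   // separate all but the first column
      list += column;
   }
   return list;
}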

Our select statement is insufficient. We explore the ability to specify a where clause.

Test name: GeneratesSelectWhereColumnEqual

Implementation: return selectStatement() + whereEq(columnName, value)

The implementation for GeneratesSelectWithColumnList would probably reside in a member function with signature std::string selectStatement() const. The subsequent test, GeneratesSelectWhereColumnEqual, can simply reuse selectStatement. This is what we want: each small test increment builds upon a prior small increment with minimal impact.
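Sketched out, that reuse might look like the following (the member name selectWhereEqStatement and the single-quoting of the value are assumptions):

std::string SQL::whereEq(
      const std::string& column, const std::string& value) const {
   return " where " + column + " = '" + value + "'";
}

std::string SQL::selectWhereEqStatement(
      const std::string& column, const std::string& value) const {
   return selectStatement() + whereEq(column, value);
}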

Over time, you’ll get a good mental picture of the implementation required to get a given test to pass. Your job is to make life easy on yourself, so you’ll get better at choosing the test that corresponds to the implementation with the simplest increment.

You will make less-than-ideal choices from time to time when choosing the next test; see Suboptimal Test Order for an example. A willingness to backtrack and discard a small bit of code can help you learn to take more incremental routes through an implementation.

Ten-Minute Limit

TDD depends on short feedback cycles. As long as you’re sticking to the red-green-refactor cycle, you should do well with TDD. But it’s still possible to bog down. From time to time, you’ll struggle getting a test to pass. Or you’ll attempt to clean up some code but break a few tests in the process. Or you’ll feel compelled to bring up the debugger to understand what’s going on.

Struggle is expected, but set a limit on the length of your suffering. Allow no more than ten minutes to elapse from the time your tests last passed. Some developers go as far as to use a timer. You need not be quite so strict about the time, but it is important to realize when your attempt at a solution derails.

If the time limit hits, discard your current effort and start over. A good version control tool like git makes reverting easy, rapid, effective, and very safe. (If you don’t use git, consider running it locally with a bridging tool like git-svn.)

Don’t get overly attached to code, particularly not code you struggled with. Throw it out with extreme prejudice. It’s at most ten minutes of code, and your solution was likely poor. Take a short break, clear your mind, and approach the problem with a new, improved outlook.

If you were stymied by something that didn’t work before, take smaller steps this second time and see where that gets you. Introduce additional asserts to verify all questionable assumptions. You’ll at least pinpoint the very line of code that caused the problem, and you’ll probably build a better solution.
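For example, a few temporary asserts dropped into a suspect function can verify your assumptions and pinpoint the failing line (a hypothetical use within the running example):

#include <cassert>

std::string SQL::createCommand(const std::string& name) const {
   assert(!name.empty());        // verify questionable assumption #1
   assert(!tableName_.empty());  // verify questionable assumption #2
   return name + tableName_;
}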

Defects

You will have defects. It's inevitable. However, TDD gives you the potential for close to zero defects in your new code. I saw one test-driving team's defect report showing only fifteen defects during the software's first eleven months in production. Other great TDD success stories exist.

Your dumb logic defects will almost disappear with TDD. What will remain? It will be all the other things that can and do go wrong: conditions no one ever expected, external things out of sync (for example, config files), and curious combinations of behavior involving several methods and/or classes. You’ll also have defects because of specification problems, including inadvertent omissions and misunderstandings between you and the customer.

TDD isn’t a silver bullet. But it is a great way to help eliminate dumb logic mistakes we all make. (More importantly, it’s a beautiful way to shape your system’s design well.)

When QA or support finally does shove a defect in your face, what do you do? Well, you’re test-driving. You might first write a few simple tests to probe the existing system. Tests with a tighter focus on related code might help you better understand how the code behaves, which in turn might help you decipher the failure. You can sometimes retain these probes as useful characterization tests (Chapter 8, Legacy Challenges). Other times, you discard them as one-time tools.

Once you think you’ve pinpointed the problem source, don’t simply fix it and move on. This is TDD! Instead, write a test that you think emulates the behavior that exposed the defect. Ensure that the test fails (red!). Fix it (green!). Refactor (refactor!).
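Suppose, for example, QA reports that statements come out garbled for table names containing spaces. First capture the report in a test that fails (the quoting behavior expected here is invented purely for illustration):

TEST(ASQLGenerator, QuotesTableNamesContainingSpaces) {
   SQL sql("order items");
   ASSERT_THAT(sql.dropStatement(), Eq("drop \"order items\""));
}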

Disabling Tests

Normally, you work on one thing at a time when test-driving code. Occasionally, though, you'll find yourself with a second test failing while you're working on getting a first test to pass. As long as another test fails, you won't be able to follow the red-green-refactor cycle for the first test (unless both are failing for the same reason).

Instead of allowing the second, failing test to distract you or divert your attention, you can disable it temporarily. Commenting out the test code would work, but a better mechanism is to mark the test as disabled. Many tools support the ability to explicitly disable a test and can remind you when you run your tests that one or more tests are disabled. The reminder should help keep you from committing the crime of accidentally checking in disabled tests.

Using Google Mock, you disable a test by prepending DISABLED_ to its name.

 
TEST(ATweet, DISABLED_RequiresUserNameToStartWithAnAtSign)

When you run your test suite, Google Mock will print a reminder at the end to let you know you have one or more disabled tests.

 
[----------] Global test environment tear-down
[==========] 17 tests from 3 test cases ran. (2 ms total)
[  PASSED  ] 17 tests.

  YOU HAVE 1 DISABLED TEST

Don’t check in code with disabled (or commented-out) tests unless you have a really good reason. The code you integrate should reflect current capabilities of the system. Commented-out tests (and production code too) waste time for other developers. “Is this test commented out because the behavior no longer exists? Is it broken? In-flux? Does it run slowly and people enable it when they want to run it? Should I talk to someone else about it?”
