Testing vs. Test-Driving: Parameterized Tests and Other Toys

Despite the word test appearing in its name, TDD is less about testing than it is about design. Yes, you produce unit tests as a result of practicing TDD, but they are almost a by-product. It might seem like a subtle difference, but the true goal is to allow you to keep the design clean over time so that you may introduce new behavior or change existing behavior with high confidence and reasonable cost.

With a testing mentality, you seek to create tests that cover a breadth of concerns. You create tests for five types of cases: zero, one, many, boundary, and exceptional. With a test-driving mentality, you write tests in order to drive in code that you believe meets desired specifications. While both testing and test-driving are about providing enough confidence to ship code, you stop test-driving as soon as you have the confidence that you’ve built all you need (and your tests all pass, of course). In contrast, a good tester seeks to cover the five types of cases as exhaustively as is reasonable.

Nothing prohibits you from writing additional after-the-fact tests when doing TDD. Usually, though, you stop as soon as you believe you have a correct and clean implementation that covers the cases you know you must support. Stated another way, stop once you can’t think of how to write a test that would fail.

As an example, consider the Roman number converter (see Appendix 2, Code Kata: Roman Numeral Converter), which converts an Arabic numeral to a corresponding Roman numeral. A good tester would probably test at least a couple dozen conversions to ensure that all the various digits and combinations thereof were covered. In contrast, when test-driving the solution, I could stop at about a dozen tests. At that point, I have the confidence that I’ve built the right algorithm, and the remainder of the work is simply filling in a digit-to-digit conversion table. (In the appendix, I drive through a few more assertions for confidence and demonstration purposes.)
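To make that “conversion table” claim concrete, here is a minimal sketch of the table-driven shape the algorithm settles into. The function name, signature, and structure are illustrative assumptions, not the code from the appendix:

#include <string>
#include <utility>
#include <vector>

// Sketch only: each table entry pairs an Arabic value with its Roman digits.
// Once the loop is test-driven into place, the remaining work is simply
// filling in more table entries.
std::string convertToRoman(unsigned int arabic) {
   const std::vector<std::pair<unsigned int, std::string>> conversions {
      {1000, "M"}, {900, "CM"}, {500, "D"}, {400, "CD"},
      {100, "C"}, {90, "XC"}, {50, "L"}, {40, "XL"},
      {10, "X"}, {9, "IX"}, {5, "V"}, {4, "IV"}, {1, "I"}};
   std::string roman;
   for (const auto& conversion: conversions)
      while (arabic >= conversion.first) {
         roman += conversion.second;
         arabic -= conversion.first;
      }
   return roman;
}

With that shape in place, an assertion such as ASSERT_THAT(convertToRoman(2014), Eq("MMXIV")) passes without any change to the algorithm; only the table grows.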

The genesis of many code-level testing tools was to support the writing of tests, not to support doing TDD. As such, many tools provide sophisticated features to make testing easier. For example, some tools allow you to define dependencies between tests. It’s a nice optimization feature if you have a suite of integration tests (see Unit Tests, Integration Tests, and Acceptance Tests) that run slowly; you can speed up your test run by stringing tests together in a certain order. (The maintenance costs and pains of such tightly coupled tests increase accordingly.) But when doing TDD, you seek fast, independent tests and therefore don’t need the complexity of test dependencies.

Nothing is wrong with wanting or using any of these testing features from time to time. However, question the desire: Does this feature suggest that I’m outside the bounds of TDD? Is there an approach that better aligns with the goals of TDD?

This section will cover, briefly, some tempting test tool features. Refer to your test tool for further details about these features if you still feel compelled to use them (even after I attempt to dissuade you).

Parameterized Tests

The Roman numeral converter (Appendix 2, Code Kata: Roman Numeral Converter) must convert the numbers from 1 through 3999. Perhaps it would be nice if you could simply iterate a list of expected inputs and outputs and pump this into a single test that took an input and an output as arguments. The parameterized tests feature exists in some test tools (Google Mock included) to support this need.

Let’s demonstrate with this very trivial class called Adder:

c3/18/ParameterizedTest.cpp
class Adder {
public:
   static int sum(int a, int b) {
      return a + b;
   }
};

Here is a normal TDD-generated test that drove in the implementation for sum:

c3/18/ParameterizedTest.cpp
TEST(AnAdder, GeneratesASumFromTwoNumbers) {
   ASSERT_THAT(Adder::sum(1, 1), Eq(2));
}

But that test covers only a single case! Yes, and we’re confident that the code works, and we shouldn’t feel the need to create a bunch of additional cases.

For more complex code, it might make us a tad more confident to blast through a bunch of cases. For the Adder example, we first define a fixture that derives from TestWithParam<T>, where T is the parameter type.

c3/18/ParameterizedTest.cpp
class AnAdder: public TestWithParam<SumCase> {
};

Our parameter type is SumCase, designed to capture two input numbers and an expected sum.

c3/18/ParameterizedTest.cpp
struct SumCase {
   int a, b, expected;
   SumCase(int anA, int aB, int anExpected)
      : a(anA), b(aB), expected(anExpected) {}
};

With these elements in place, we can write a parameterized test. We use TEST_P, P for parameterized, to declare the test.

c3/18/ParameterizedTest.cpp
TEST_P(AnAdder, GeneratesLotsOfSumsFromTwoNumbers) {
   SumCase input = GetParam();
   ASSERT_THAT(Adder::sum(input.a, input.b), Eq(input.expected));
}

SumCase sums[] = {
   SumCase(1, 1, 2),
   SumCase(1, 2, 3),
   SumCase(2, 2, 4)
};

INSTANTIATE_TEST_CASE_P(BulkTest, AnAdder, ValuesIn(sums));

The last line kicks off calling the test with injected parameters. INSTANTIATE_TEST_CASE_P takes the name of the fixture as its second argument and takes the values to be injected as the third argument. (The first argument, BulkTest, represents a prefix that Google Mock prepends to the test name.) The ValuesIn function indicates that the injection process should use an element from the array sums to inject into the test (GeneratesLotsOfSumsFromTwoNumbers) each time it’s called. The first line in the test calls GetParam, which returns the injected value (a SumCase object).

Cool! But in the dozen-plus years I’ve been doing TDD, I’ve used parameterized tests less than a handful of times. It works well if you have a lot of simple data you want to crunch through. Perhaps someone gave you a spreadsheet with a bunch of data cases. You might dump those values as parameters (and maybe even write a bit of code to pull the parameters directly from the spreadsheet). These are perfectly fine ideas, but you’re no longer in the realm of TDD.
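If you do head down that path, the mechanics are simple enough. Here is a minimal sketch that pulls SumCase values from a comma-separated export of such a spreadsheet and feeds them into the same AnAdder fixture shown earlier. The file name sums.csv and the loadSumCases helper are assumptions for illustration, not part of the chapter’s example:

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Sketch only: reads rows of the form "a,b,expected" into SumCase values.
std::vector<SumCase> loadSumCases(const std::string& filename) {
   std::vector<SumCase> cases;
   std::ifstream in(filename);
   std::string line;
   while (std::getline(in, line)) {
      std::istringstream row(line);
      int a, b, expected;
      char comma;
      if (row >> a >> comma >> b >> comma >> expected)
         cases.push_back(SumCase(a, b, expected));
   }
   return cases;
}

// Instantiates GeneratesLotsOfSumsFromTwoNumbers once per row in the file.
INSTANTIATE_TEST_CASE_P(SpreadsheetTest, AnAdder,
      ValuesIn(loadSumCases("sums.csv")));

Only the instantiation line differs from the in-memory version; the parameterized test itself remains unchanged.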

Also, remember that a goal of TDD is to have tests that document behaviors by example, each named to aptly describe the unique behavior being driven in. Parameterized tests can meet this need, but more often than not, they simply act as tests.

Comments in Tests

The code examples distributed as part of the documentation for a prominent test tool include a number of well-annotated tests. Comments appear, I presume, for pedantic reasons.

 
// Tests the c'tor that accepts a C string.
TEST(MyString, ConstructorFromCString)

It takes a lot to offend me, but this comment comes pretty close. What a waste of typing effort, space, and time for those who must read the code.

Of course, comments aren’t a test tool feature but a language feature. In both production code and test code, your best choice is to transform as many comments as you can into more-expressive code. The remaining comments will likely answer questions like “Why in the world did I code it that way?”

Outside of perhaps explaining a “why,” if you need a comment to explain your test, it stinks. Tests should clearly document class capabilities. You can always rename and structure your tests in a way (see One Assert per Test and Arrange-Act-Assert/Given-When-Then) that obviates explanatory comments.

In case I haven’t quite belabored the point, don’t summarize a test with a descriptive comment. Fix its name. Don’t guide readers through the test with comments. Clean up the steps in the test.
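Applied to the earlier documentation example, renaming does the comment’s job. The body below is hypothetical, since the original shows only the declaration and MyString’s interface is not given:

// No summary comment needed once the name states the behavior.
// (The MyString constructor and c_str() accessor are assumed for illustration.)
TEST(AMyString, CanBeConstructedFromACString) {
   MyString s("hello");

   ASSERT_THAT(s.c_str(), StrEq("hello"));
}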
