Tests Come FIRST

Wondering if you’ve built a good unit test? Vet it against the FIRST mnemonic, devised by Brett Schuchert and Tim Ottinger. The mnemonic reminds you of a key part of TDD’s definition: tests come first.

FIRST breaks down into the following:

  • F for Fast

  • I for Isolated

  • R for Repeatable

  • S for Self-verifying

  • T for Timely

Fast

TDD supports incremental and iterative development through its core cycle of specify, build, and refactor. How long should a cycle take? The shorter, the better. You want to know as soon as your code either doesn’t work or breaks something else. The more code you grow between introducing a defect and discovering it, the more time you stand to waste in pinpointing and fixing the problem. You want ultra-rapid feedback!

We all make mistakes as we code. We all also initially craft code that exhibits less-than-ideal design characteristics. Much as writers create rough drafts, we create rough code for our first pass. But code gets harder to change as we build slop upon slop. Our best hope for sanity? Continually examine and clean up each small bit of code.

Not only must you ensure your changed or new unit test runs, you must ensure your small change doesn’t break something in a far-slung corner of your system. You want to run all existing unit tests with each small change.

Ideally, you want to code a tiny bit of logic, perhaps a line or two, before getting feedback. But doing so incurs the cost of compiling, linking, and running your test suite.

How important is it to keep this cycle cost low? If compiling, linking, and running your tests takes on average three or four seconds, you can keep your code increments small and your feedback nearly continuous. But imagine your suite takes two minutes to build and run. How often will you run it? Perhaps once every ten to fifteen minutes. If your tests take twenty minutes to run, you might run them a couple of times a day.

In the absence of rapid feedback, you will write fewer tests, refactor your code less, and increase the time between introducing a problem and discovering it. Falling back to these old habits means that you'll likely see few of the potential benefits of TDD. You might choose to abandon TDD at this point. Don't be that guy!

The Cost of Building

Build times in C++ present a hefty challenge. A compile and link in a sizeable system can require several minutes and sometimes much more.

The lion’s share of the build time directly relates to the dependency structure of your code. Code dependent on a change must be rebuilt.

Part of doing TDD well requires crafting a design that minimizes rampant rebuilds. If your heavily used class exposes a large interface, clients must rebuild when it changes, even if your changes have little to do with their interests in your class. Per the Interface Segregation Principle (ISP) (Agile Software Development, Principles, Patterns, and Practices [Mar02]), forcing clients to depend upon interfaces they don’t use indicates a design deficiency.
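
As a rough sketch of the idea (the class and function names here are invented for illustration), splitting a fat interface into narrow, client-specific ones means a change that matters only to one group of clients no longer forces the others to rebuild:

#include <string>

// Narrow, client-facing interfaces. Web code includes only HtmlRenderer;
// batch jobs include only CsvExporter.
class HtmlRenderer {
public:
    virtual ~HtmlRenderer() = default;
    virtual std::string renderHtml() const = 0;
};

class CsvExporter {
public:
    virtual ~CsvExporter() = default;
    virtual void exportCsv() = 0;
};

// One concrete class can still implement both. A change that affects only
// CSV export doesn't touch the header the web clients depend upon.
class ReportService : public HtmlRenderer, public CsvExporter {
public:
    std::string renderHtml() const override { return "<html></html>"; }
    void exportCsv() override { /* write rows */ }
};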

Similarly, abusing other principles can result in longer build times. The Dependency Inversion Principle (DIP) tells you to depend upon abstractions, not details (Agile Software Development, Principles, Patterns, and Practices [Mar02]). If you change details of a concrete class, all its clients must rebuild.

You can introduce an interface (a class containing only pure virtual functions) that your concrete class realizes. Client code interacts through the abstraction the interface provides and isn't triggered to recompile when the implementation details of the concrete class change.
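
Here's a minimal sketch of that arrangement, again with invented names: clients compile against the abstract header only, so the concrete class's private details can change freely.

#include <string>

// Persistence.h: the abstraction clients include and compile against.
class Persistence {
public:
    virtual ~Persistence() = default;
    virtual void save(const std::string& key, const std::string& value) = 0;
};

// FilePersistence: the concrete detail. Client code never includes this
// header, so changes to its private members or function bodies don't
// trigger client recompiles.
class FilePersistence : public Persistence {
public:
    void save(const std::string& key, const std::string& value) override {
        // ... write the key/value pair somewhere under directory_ ...
    }
private:
    std::string directory_;   // implementation detail, free to change
};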

If you’re introducing new private methods as part of refactoring, you can find yourself waiting impatiently on long rebuilds. You might consider using the “pointer to implementation” (PIMPL) idiom. To use PIMPL, extract your concrete details to a separate implementation class. Delegate to the implementation as needed from the interface functions. You’re then free to change the implementation all you want, creating new functions at will, without triggering recompiles on the code dependent on the public interface.
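
A minimal sketch of PIMPL, using a hypothetical Catalog class: the header exposes nothing but a pointer to a forward-declared implementation, and all the details live in the .cpp file.

#include <memory>
#include <vector>

// Catalog.h: the public interface stays stable.
class Catalog {
public:
    Catalog();
    ~Catalog();
    void add(int id);
    unsigned int size() const;
private:
    class Impl;                   // forward declaration only
    std::unique_ptr<Impl> impl_;  // clients never see Impl's members
};

// Catalog.cpp: private details (and any helper functions you introduce
// while refactoring) live here, so changing them doesn't touch Catalog.h
// or rebuild the code that includes it.
class Catalog::Impl {
public:
    std::vector<int> ids;
};

Catalog::Catalog() : impl_(new Impl) {}
Catalog::~Catalog() = default;
void Catalog::add(int id) { impl_->ids.push_back(id); }
unsigned int Catalog::size() const { return static_cast<unsigned int>(impl_->ids.size()); }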

With TDD, your design choices no longer represent nebulous concerns; they directly relate to your ability to succeed. Success in TDD is a matter of keeping things clean and fast.

Dependencies on Collaborators

Dependencies on what you're changing increase build time. For running tests, the concern about dependencies moves in the other direction: dependencies from what you're testing on other code increase test execution time.

If you test code that interacts with another class that in turn must invoke an external API (for example, a database call), the tests must wait on the API call. (They’re now integration tests, not unit tests.) A few milliseconds to establish a connection and execute a query might not seem like much. But if most tests in your suite of thousands must incur this overhead, the suite will take several minutes or more to complete.
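
Test doubles get a full treatment in Chapter 5, Test Doubles; as a quick, hypothetical sketch, putting the database behind an interface lets a hand-rolled fake stand in for it, so the test never pays for a connection or a query.

#include <string>
#include "gmock/gmock.h"
using namespace ::testing;

// Production code reaches the database only through this interface.
class CustomerStore {
public:
    virtual ~CustomerStore() = default;
    virtual std::string nameFor(int customerId) = 0;   // real version runs a query
};

class GreetingService {
public:
    explicit GreetingService(CustomerStore& store) : store_(store) {}
    std::string greeting(int customerId) {
        return "Hello, " + store_.nameFor(customerId) + "!";
    }
private:
    CustomerStore& store_;
};

// The fake answers instantly, keeping this a fast, isolated unit test.
class FakeCustomerStore : public CustomerStore {
public:
    std::string nameFor(int) override { return "Alma"; }
};

TEST(AGreetingService, GreetsTheCustomerByName) {
    FakeCustomerStore store;
    GreetingService service(store);

    ASSERT_THAT(service.greeting(42), Eq("Hello, Alma!"));
}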

Running a Subset of the Tests

Most unit testing tools allow you to run a subset of the entire test suite. Google Test, for instance, lets you specify a filter. Passing the following filter to your test executable runs all tests whose fixture name starts with Holding and whose test name includes the word Avail. Running a smaller subset of tests might save you a bit of execution time.

 
./test --gtest_filter=Holding*.*Avail*

Just because you can doesn’t mean that you should...at least not habitually. Regularly filtering your test run suggests you have a bigger problem—your tests have too many dependencies on slower things. Fix the real problem first!

When you aren’t able to easily run all your tests, don’t immediately jump to running a single unit test at a time. Find a way to run as many tests as possible. At least try to run all of the tests in a given fixture (for example, Holding*.*) before giving up and running only a single test at a time.
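
For example, the following command runs every test in fixtures whose names start with Holding:

./test --gtest_filter=Holding*.*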

Running a subset of the tests might save you time up front, but remember: the fewer tests you run, the more likely it is that you'll discover problems later, and the later you discover a problem, the longer it will take to fix.

Isolated

If you’re doing TDD, each of your tests should always fail at least once. When you’re creating a new test, you’ll know the reason it fails. But what about three days or three months down the road? Will the reason a test fails be clear? Creating tests that can fail for several reasons can waste time for you or someone else needing to pinpoint the cause.

You want your tests to be isolated—failing for a single reason. Small and focused tests, each driving in the existence of a small bit of behavior, increase isolation.

Also, each test should verify a small bit of logic independent from external concerns. If the code it tests interacts with a database, file system, or other API, a failure could be because of one of many reasons. Introducing test doubles (see Chapter 5, Test Doubles) can create isolation.

Not only should tests be independent from external production system factors, they should also be independent from other tests. Any test that uses static data runs the risk of failing because of stale data.

If your test requires extensive setup or if the production code could be holding on to stale data, you might find yourself digging to find out that a subtle system change broke the test. You might introduce a precondition assertion that verifies any assumptions your test makes in its Arrange portion.

c7/2/libraryTest/HoldingTest.cpp

TEST_F(ACheckedInHolding, UpdatesDateDueOnCheckout)
{
   ASSERT_TRUE(IsAvailableAt(holding, *arbitraryBranch));

   holding->CheckOut(ArbitraryDate);

   ASSERT_THAT(holding->DueDate(),
      Eq(ArbitraryDate + date_duration(Book::BOOK_CHECKOUT_PERIOD)));
}

When a precondition assertion fails, you’ll waste less time finding and fixing the problem. If you find yourself employing this technique often, though, find a way to simplify your design instead—precondition asserts suggest that the level of understanding you have about your system is insufficient. They might also suggest you’re burying too much information in setup.

Repeatable

Quality unit tests are repeatable—you can run them time after time and always obtain the same results, regardless of which other tests (if any) ran first. I appreciate the rapid feedback my test suite provides so much that I’ll sometimes run it a second time, just to get the gratification of seeing the tests all pass. Every once in a while, though, my subsequent test run will fail when the previous run succeeded.

Intermittent test failures are bad news. They indicate some level of nondeterministic or otherwise varying behavior in your test runs. Pinpointing the cause of variant behavior can require considerable effort.

Your tests might fail intermittently for one of the following reasons:

  • Static data: A good unit test doesn’t depend upon the side effects of other tests and similarly doesn’t let these remnants cause problems. If your test can potentially fail because of lingering static data, you might not see it fail until you add new tests or remove others. In some unit testing frameworks, tests are added to a hash-based collection, meaning that their order of execution can change as the number of tests changes.

  • Volatility of external services: Avoid writing unit tests that depend on external forces out of your control, such as the current time, file system, databases, and other API calls. Introduce test doubles (Chapter 5, Test Doubles) as needed to break the dependency; a small clock sketch follows this list.

  • Concurrency: Threaded or other multiprocessing execution will introduce nondeterministic behavior that can be exceptionally challenging for unit tests. Refer to Chapter 9, TDD and Threading for a few suggestions on how to test-drive multithreaded code.
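
As a small, hypothetical sketch of breaking the current-time dependency: production code asks an injected clock for "now," and the test pins that value so every run sees the same answer.

#include "gmock/gmock.h"
using namespace ::testing;

// A clock seam: code under test asks a Clock instead of reading system time.
class Clock {
public:
    virtual ~Clock() = default;
    virtual long now() const = 0;    // e.g., seconds since some epoch
};

class FixedClock : public Clock {
public:
    explicit FixedClock(long time) : time_(time) {}
    long now() const override { return time_; }
private:
    long time_;
};

class Token {
public:
    Token(const Clock& clock, long lifetimeSeconds)
        : expiry_(clock.now() + lifetimeSeconds) {}
    bool isExpiredAt(long time) const { return time >= expiry_; }
private:
    long expiry_;
};

TEST(AToken, ExpiresOnceItsLifetimeElapses) {
    FixedClock clock(1000);
    Token token(clock, 60);

    ASSERT_FALSE(token.isExpiredAt(1059));   // same result on every run
    ASSERT_TRUE(token.isExpiredAt(1060));
}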

Self-Verifying

You automate tests to get your human self out of the picture—to eliminate slow and risky manual testing. A unit test must execute code and verify that it worked without involving you. A unit test must have at least one assertion; it must have failed at least once in the course of its existence, and there must be some way for it to fail sometime in the future.

Avoid any concessions to this guideline. Don’t add cout statements to your tests as substitutes for assertions. Manually verifying console or log file output wastes time and increases risk.

Devious programmers looking to bump up their code coverage numbers (a goal sometimes demanded by misguided managers) quickly figure out that they can write tests without assertions. These nontests are a complete waste of effort, but executing a broad swath of code without asserting anything does improve the metrics.

Timely

When do you write tests? In a timely fashion, meaning that you write them first. Why? Because you’re doing TDD, of course, and you’re doing TDD because it’s the best way to sustain a high-quality codebase.

You also don’t write a bunch of tests in advance of any code. Instead, you write one test at a time, and even within that one test you write one assertion at a time. Your approach is as incremental as it can be, viewing each test as a small bit of specification that you use to immediately drive in accordant behavior.
