Chapter 8. Discover and Explore

In this part we have seen how a traffic light system evolves. We worked from the outside into our application. One of the main differences from the first part is that we used the glue code to motivate the design for our domain classes. Another thing that enabled us to work in this way is that the designer of the system is the same person—or better, pair—as the one writing the glue code.

In fact, while implementing the glue code for the test automation, we noticed possible designs for the production class. In the first case we decided to implement a state pattern once we could see how the glue code differentiates the light states. In the second case we could extract the light switching controller directly from the glue class.

One thing we realized too late was the need for unit tests for our glue code. Because we had not driven the glue code through unit tests, we ran into the problem of multiple responsibilities in the controller code when we extracted it and put it under test. If we had taken this step earlier, we could have spotted the multiple responsibilities sooner by reviewing the tests. The take-away here: strive to add unit tests for your support code as early as possible. With tests in place, we might have found the multiple responsibilities well before heading for the extraction.

The ATDD approach makes use of test automation. Since test automation is software development, it makes sense to implement the test automation code using TDD whenever possible. You might be able to get started with functional tests from a business perspective rather quickly. But as you pile up more and more glue code, you will become less and less aware of the side effects of changes to that code. With proper unit tests in place, even for your glue code, you can avoid that trap.

So, with ATDD you can drive the production code as well as discover and explore the domain. But stay aware that you had better add unit tests to your code early enough to avoid painting yourself into a corner. As a wrap-up, we will take a closer look at these more technical characteristics of the ATDD approach.

Discover the Domain

In this part we used the specifications to discover the domain of the underlying system. Initially, we had a vague idea of the underlying structure for the production code. While automating one example after another, we validated our thoughts. When we had a clear idea about how the production design might look, we started to write the production class for the domain code.

The implementation of the glue code helped us discover the domain. Once we spotted an implementation that supported the examples identified thus far, we could reflect on the code and look for patterns that motivate the domain code. As we saw the domain behavior growing in the glue code, the step toward either extracting the domain code from the glue code or writing it in parallel in a new class became obvious.

For the light state transitions, we discovered the need for a controller to control more than one light and coordinate state transitions between lights. The discovery of the controller gave the implementation new momentum. As we could see where we were heading, we extracted the domain code from the glue code. We noticed that we had forgotten a validator for different light state combinations. This discovery helped to improve the code even further.
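To make the discovered structure concrete, here is a minimal sketch of a controller that coordinates two lights and consults a validator for the allowed state combinations. The names and the validation rule are illustrative assumptions, not the exact code developed in this part.

```java
// Illustrative two-light state for the sketch.
enum State { RED, GREEN }

// Validator for light state combinations; the rule shown here is a
// made-up example: both directions green at once would be unsafe.
class Validator {
    boolean isValid(State first, State second) {
        return !(first == State.GREEN && second == State.GREEN);
    }
}

// Controller coordinating two lights; it rejects invalid combinations
// instead of switching to them.
class Controller {
    private final Validator validator = new Validator();
    private State first = State.RED;
    private State second = State.RED;

    boolean switchTo(State newFirst, State newSecond) {
        if (!validator.isValid(newFirst, newSecond)) {
            return false; // keep the previous, valid combination
        }
        first = newFirst;
        second = newSecond;
        return true;
    }

    State first() { return first; }
    State second() { return second; }
}
```

The point is the seam: because the validator is a separate object, both it and the controller can be unit tested in isolation.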

For the evolution of the state concept to represent a light state, we developed the code in parallel to the existing glue code. We drove the design of the different light states based upon our previous experiences. In order to drive the design, we used test-driven development with microtests for the different states and transitions. Once we had finished the state pattern, we were able to replace the existing logic in the glue code with our new state enumeration and validate our new domain object against the already existing acceptance tests.
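As an illustration of the state concept, the following sketch shows a light state enumeration that knows its valid transitions. It assumes the red, red-yellow, green, yellow cycle used in this part; the method name and the exact transition rules are illustrative, not the final code.

```java
// Illustrative light state enumeration: each constant knows which
// follow-up states are valid, so transition checks live in one place.
enum LightState {
    OFF, RED, RED_YELLOW, GREEN, YELLOW;

    boolean isValidTransitionTo(LightState next) {
        switch (this) {
            case OFF:        return next == RED;
            case RED:        return next == RED_YELLOW || next == OFF;
            case RED_YELLOW: return next == GREEN;
            case GREEN:      return next == YELLOW;
            case YELLOW:     return next == RED;
            default:         return false;
        }
    }
}
```

Microtests for each state and transition drive exactly this kind of table-like logic, one case at a time.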

One of the problems with the latter approach to ATDD is that we could have discovered a wrong implementation far too late. If we had hooked up the new domain object in the glue code and then seen many failing acceptance tests, we would have needed to take a large step back to a clean basis. Such big steps are risky in terms of code design.

Because of this, I usually try to wait one passing acceptance test longer, to see the resulting design more clearly before taking such a large step. In the case of the light state enumeration, I proposed waiting a bit longer than I would have when developing the code using TDD. In general, I know that I can rely on some of my expertise, but when it comes to acceptance tests, I try to hold that instinct back and move one step closer toward the emerging design. This is a trade-off decision. While the design of the glue code might start to degenerate quickly if I wait too long, I also know that I might end up with an even worse design for my production code if I push for the refactoring too early.

Larger refactorings are not a big problem because today’s IDEs support small- and large-scale refactorings in the code base quite well. So, the longer I can play around with different options for my domain code, the better my decision regarding the resulting production code will be.

Another point is crucial when working on a team with more than one pair of programmers: you want others to see what you do early. Continuous Integration (CI) [DMG07] of tiny changes helps to avoid merge hell. With tiny adaptations you can oversee the side effects while merging. If you increase the batch size of the changes, you might find yourself needing to merge a month of code from several separate branches. I once heard of a company where this took two weeks. Usually I check in my results very often, several times per day.

Drive the Production Code

Starting with the examples first manifests a specification for the software. You can directly validate whether your software fulfills the specification by automating it. This is why Gojko Adzic proposed renaming Acceptance Test-driven Development to Specification by Example [Adz09, Adz11].

In the first part we saw how programmers and testers can work in parallel using the ATDD approach. In this part we paired together on some tests. Working from the example to the production code drives the design of the domain. Instead of working in parallel, we started with the examples. By automating the examples, we not only discovered the domain, but also drove the implementation of the production code.

The advantage here lies in the fact that every relevant line of production code will have automated acceptance tests when applying ATDD in this manner. The architecture of your production system will be testable by definition, and it will be covered to a large degree by automated tests.

Another advantage is the fact that your automated tests cover actual requirements. Instead of checking irrelevant conditions, the examples express what the domain experts really care about.

The high degree of coverage for your production code provides you with valuable feedback when moving to the next iteration. In 2008 I started to play around with the FitNesse code base. Since it’s an open source system, I could download the source code and see what it does. At first, I played a bit with the code, trying to add a feature or two. Later I fixed bugs in the system. There were roughly 2,000 unit tests in place alongside 200 acceptance tests. All of these tests ran together in less than five minutes, so the suite gave me feedback within five minutes whenever I broke something. This not only helped me learn about the code base, but it also helped me notice side effects of my changes.

A good functional test suite can help introduce new colleagues to a system, just as I was able to introduce myself to the FitNesse code base. With short execution times and quick feedback, your colleagues will also be able to work with your code base. You can easily achieve collective code and test ownership with this background. In such a system everyone feels safe changing anything, without excuses like “I can’t make changes in that code or those tests since they are another person’s portion of the code base.”

When your software needs extensions or maintenance, you will also revisit your examples. Since the examples in your automated acceptance tests tell the story of the project so far, you have a documentation system in place. Because the tests are executable and tell you at the press of a button whether your product fulfills the specification, you have created an executable specification for your product. This also means that the specification should be kept up to date. These benefits all follow naturally when you drive your code base with the examples.

Test Your Glue Code

One of the immediate lessons for me, after we failed dramatically with the extraction of the validator, is to test my glue code. Only after extracting the behavior and retrofitting unit tests to the production code did we realize that our initial design was flawed. We could have prevented that by test-driving our glue code right from the start.

On second thought, this is probably context-dependent advice. I often find myself exploring the domain of the production system. During exploration mode I come up with questions about the domain so quickly that I leave out unit tests for my code. In exploration mode I try to find out more about the domain itself. When I feel comfortable with my understanding of the domain, I revert to the code base as it was before I went into exploration mode and start over using TDD. I do this consciously in order to let my thoughts wander. I start interviewing the problem domain in code and try to reach a better understanding of the problem. During this thinking mode, my brain is in a similar state as during brainstorming: my right hemisphere is highly engaged, coming up with rich thoughts. If I broke this stream of thoughts and ideas, I would risk losing momentum.

Unfortunately, this often comes back to bite me. Once I run out of new ideas, I find myself with code that is hard to test. When I find myself in such a mess, I know that it’s time to throw away what I have produced so far and start over from scratch. It might sound painful to throw away code that looks so beautiful in the editor, but it’s not the code that should last as the end result; it’s the thinking and learning that you have put into it. If I start over now, I know that I will end up with a more flexible design and can drive the now-known steps with unit tests right from the start.

In general, testing my glue code can happen in two different ways. In the first case, my glue code is straightforward: it contains no control structures such as ifs and loops, and each function is no more than ten lines long. This code is easy to understand. It’s short enough that a reader can grasp in an instant what it’s about. When I extract functions from my glue code, I keep an eye on the method names and give them intent-revealing names. With such simple code I don’t worry about being unable to understand it a year from now. In order to test this simple code, I am happy if I execute my acceptance tests and see them passing. Executing the acceptance tests in order to test my glue code is fine in this case.

The second case of glue code contains branches and loops. Even here, methods and functions are no longer than ten lines, but the conditional logic gives me a headache at times. I know that I will be lost a year from now if I let this code stand on its own. In this case I know it’s time to get my hands dirty and write unit tests for this code. This helps the reader of the code to understand my thought process and to extend the code if necessary. Just executing the acceptance tests is not enough here, because I would have to execute many of them to cover all the conditional outcomes.
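To illustrate, here is the kind of conditional glue helper I mean. The class and the cell format are hypothetical examples, not code from this part.

```java
// Hypothetical glue helper with branches: it turns a test-table cell
// such as "red/yellow" into the individual state names. Conditional
// logic like this deserves its own unit tests.
class LightStateParser {
    String[] parse(String cell) {
        if (cell == null || cell.trim().isEmpty()) {
            return new String[0];            // empty cell: no states
        }
        if (cell.contains("/")) {
            return cell.split("/");          // combined state such as red/yellow
        }
        return new String[] { cell.trim() }; // single state
    }
}
```

A handful of unit tests for the empty, single, and combined cases covers every branch directly, without running a single acceptance test.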

As a general rule, whenever I end up with a conditional statement in my glue code, I think hard about how to unit test this piece. At one company I discovered the need for an interface for service calls toward an EJB system. I could easily mock that interface for the unit tests of my glue code. With guidance from Growing Object-Oriented Software, Guided by Tests [FP09] I created a fluent API to test and mock the remote calls to the system under test for fast unit tests. Since this API was well known to me, I could easily grow tests for this glue code with high statement coverage (above 90%). When executing the tests in the test framework against the real system, I could replace the service executor with one that used reflection to fetch the proper local and remote home for the EJB service. The ServiceExecutor became a needle eye for calls to the other system, one that I could conveniently replace in my unit tests.

Since every single call to the remote service goes through a single point, I called this approach the Needle Eye pattern. Calls to an external system became a needle eye. This means that there are either some restrictions on this subsystem or some convention in place. In our case, the package names of the EJB2 services followed a strict naming convention. This allowed us to use reflection to find the local and remote home for a given service input. We encapsulated the calls to the remote system in a simple interface that provided an execute method for a common service input class and returned the output from that service (see Listing 8.1).

Listing 8.1. Signature of the EJB2 service executor for the Needle Eye pattern

public Output execute(Input serviceInput);

For our fixture classes, we used constructor-based dependency injection to replace the concrete service executor, which used reflection to locate the real service home, with a mocked one that the unit test prepared with the proper expectations. From this point we could also simulate exceptions thrown by the system under test, thereby achieving high statement coverage in the unit tests for our glue code. This gave us great confidence in our test code while the project evolved.
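Put together, the mechanics look roughly like this. The ServiceExecutor interface follows Listing 8.1; the simplified Input and Output classes, the fixture, and the field names are illustrative assumptions.

```java
// The needle eye: every call to the remote system goes through this
// single interface (compare Listing 8.1).
interface ServiceExecutor {
    Output execute(Input serviceInput);
}

// Simplified common input/output classes for the services.
class Input {
    final String serviceName;
    Input(String serviceName) { this.serviceName = serviceName; }
}

class Output {
    final String payload;
    Output(String payload) { this.payload = payload; }
}

// The fixture receives the executor through its constructor, so a unit
// test can inject a mock while the test framework injects the
// reflection-based executor for the real system.
class ServiceFixture {
    private final ServiceExecutor executor;

    ServiceFixture(ServiceExecutor executor) {
        this.executor = executor;
    }

    String callService(String serviceName) {
        return executor.execute(new Input(serviceName)).payload;
    }
}
```

In a unit test, a stubbed executor stands in for the remote system; thrown exceptions can be simulated the same way.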

Value Your Glue Code

In general, I treat my test code at least as well as my production code. Usually I value my glue code even higher than I value my production code because my glue code sits between two vital points in my development project.

The production code, on the one hand, could change in every single iteration. Changes to an existing feature may happen nearly every time. That makes the code that talks directly to the application under test quite unstable. In Agile Software Development: Principles, Patterns, and Practices [Mar02], Robert Martin refers to the Stable Dependencies Principle: packages in the code should depend on more stable packages. Since the code close to the unstable code of the application is unstable as well, it makes sense to encapsulate changes to this part of the code and make it depend upon more stable structures in the code base. Usually I achieve this with either a wrapper or an implementation of the delegation pattern (see [GHJV94]).
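A sketch of that encapsulation, with made-up names: the glue code depends on a small driver class, and only the driver changes when the application’s API does.

```java
// Stand-in for the application's volatile API; here it merely records
// the last command so the example stays observable.
class TrafficLightSystem {
    String lastCommand;

    void switchTo(String state) {
        lastCommand = state;
    }
}

// Stable seam for the glue code: fixtures call the driver, never the
// application API directly, so API changes stay inside this class.
class LightDriver {
    private final TrafficLightSystem system;

    LightDriver(TrafficLightSystem system) {
        this.system = system;
    }

    void turnRed() {
        system.switchTo("red");
    }
}
```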

The second consideration for my glue code is the instability of the test language. Your test data form a language that represents the domain of your application. As development on the application proceeds, your test data also evolve. This means that the glue code connecting the test data with the application under test needs to evolve as well. As our understanding of the underlying concepts changes, my glue code needs the flexibility to change, too.

Usually I put behaviors that change alongside the development of the test language in their own classes. This involves, for example, concepts like money (a floating-point number with a currency symbol), durations, or time. Another solution to this problem is sometimes to use an enumeration in the production code and provide the necessary conversion with an editor, as we saw in this part with the light state enum. The main advantage of this approach is that I can easily provide a conversion method. Sometimes I find out that I needed that concept in the domain anyway. At this point I am usually glad that I already have the conversion mechanism from my glue code as a by-product.
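As an example, a money concept in the test language might look like the following sketch; the class and the cell format are hypothetical, not code from this part.

```java
import java.math.BigDecimal;

// Hypothetical test-language concept class: a money value parsed from
// a table cell such as "12.50 EUR". Keeping this concept in its own
// class isolates the rest of the glue code from changes to the test
// language.
class Money {
    final BigDecimal amount;
    final String currency;

    private Money(BigDecimal amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    // Parses "<amount> <currency>", e.g. "12.50 EUR".
    static Money parse(String cell) {
        String[] parts = cell.trim().split("\\s+");
        return new Money(new BigDecimal(parts[0]), parts[1]);
    }
}
```

If the cell format in the test tables changes later, only this one class needs to follow.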

Between these two unstable interfaces lies some tiny layer of code that interconnects the two domain representations with each other. The smaller I can keep this layer, the better designed my domain classes usually are. When I find I need a lot more code to connect production concepts with acceptance test concepts, I reflect on my current design. More often than not I find my domain object lacking a basic concept. By extracting that concept from the glue code to my domain code, I keep my glue code as simple as possible—but no simpler than that.

Putting it all together, we have highly unstable dependencies toward the production system and toward the test data. There should be a thin line connecting the two worlds with each other. If this is not the case, take a larger view of your production, support, and glue code and see if you can improve anything there. If you don’t see an opportunity to do so, continue to grow new tests, but keep an eye on this particular code. Over time your understanding of the domain may change, and you will see new opportunities for this portion in your code.

Summary

The “driven” part of ATDD puts the emphasis on driving your whole application from the outside in. In this part we saw how acceptance tests help you discover the domain of your application.

Discovering the domain alone is not enough. We also needed to drive our production code from the acceptance tests. Working from the acceptance tests inward toward the system enabled us to see crucial design decisions for our production classes. In the long run this will help us a lot.

We also learned that we should value the code that we write to test our systems. One major reason I see test automation approaches fail at clients is that they don’t value their test code. Test automation is software development. That means we have to refactor it and align it with the same principles as our production code. I found the Single Responsibility Principle, the Open-Closed Principle, the Interface Segregation Principle, and the Dependency Inversion Principle especially helpful for designing my glue code.

One side effect of such principles is that we should test our glue code. By that I mean not merely executing it through the acceptance tests, but really using unit tests to drive your glue code. Whenever possible, divide your glue code into smaller functions that you can easily test in isolation. Even if this is impossible for you, you should strive to find ways, such as the Needle Eye pattern, to decouple the system under test from your glue code. Otherwise a small change to the system can cause a large change to your safety net of acceptance tests. Don’t go there.
