CHAPTER 5


Test-First Development

Refactoring is a powerful tool, enabling you to improve the design of your software. It means taking working code and changing it, sometimes deeply, so that it performs exactly the same tasks it performed before, only with a better structure. This could sound quite pointless: why risk losing a piece of working code just to improve its design? Shouldn't the functionality of working code be valued more than its structure?

Any wise programmer should ask herself questions like these at the first mention of refactoring. We have already called attention many times to the importance of preserving our existing code base and why this should be our main goal. How, then, can we change it without introducing errors? How can we explore new ways to arrange our software? Is it possible to refactor our code protected by a strong yet gentle safety net? Yes, it is, because automated tests are at hand: one of the most powerful tools in the history of software development.

Building Value One-Way

You have been writing thousands and thousands of lines of code for years, and you have spent thousands of hours debugging that code, but how often have you stopped to think about how unproductive debugging is? Yes, we really mean it: debugging is not productive. Debugging is just a way to finally deliver what you were already supposed to have delivered.

Even the best of our customers is willing to pay only to get technical value to invest in her business. In a healthy environment the customer will also try to maximize her return on investment (ROI) by getting the most value for the lowest expense, weighing short-term goals against longer-term ones. In the same healthy environment, we should support these needs by trying to maximize our own ROI by reducing costs, not by delivering lower quality.

Debugging is a step back from a healthy developer-customer system because debugging is always a cost. Even on a fixed-price contract featuring a warranty on bugged features, the customer pays a cost for debugging, because a dollar lost now is not paid back by the same dollar next week. If debugging were considered standard practice in the automotive industry, we would be buying cars incapable of bringing us home 60–70% of the time, attached to warranties providing replacements that actually function. It can happen, we know, but you wouldn't be pleased to see it happen 60–70% of the time. Yet those numbers are considered low in the software industry!

Obviously this way of conceiving software development has its roots in the reality of the software industry, and no one should be faulted for debugging code when a defect is found. What we advocate here is a way to model your software development process around the fact that during the last 20 years many techniques have arisen to reduce or even erase the need for debugging, making the development of software a one-way process: from customer requirements to implementation, with few or no features bouncing back and forth between done and in progress.

Chaos in a Cage

We have more than one kind of automated test to rely upon. In this book we will focus on unit tests and functional tests. Unit tests are closer to the developer's point of view, while functional tests verify the software's correctness and conformity in fulfilling customer requirements from a user's point of view. We will look at both kinds of tests in more detail in the following chapter, but we want to make clear here that together they constitute a way to defend your code from chaotic evolution and to attack the complexity every developer inherently has to cope with.

Before we go on, let us introduce you to unit tests and functional tests.

Unit Tests

Unit tests confirm that a single unit of code computes the correct output when passed a well-defined input. The developer writes a test that automatically feeds a meaningful set of different inputs to the unit and checks whether the output is right. A unit test is meant to test only a single unit of code, so any interaction between that code and an external actor providing a service or data should be simulated in a safe way; that way, whenever a unit test fails, the developer knows which few lines of code are wrong. If the code under test needs anything like a database or a web service, it must be simulated with a fake version to avoid polluting our testing scope.

What does a unit test look like? It depends on the testing framework you use, but a wide range of frameworks sticks to the xUnit de facto standard. For this book we opted for PHPUnit and, while we will describe its use in detail in the next chapter, we want you to have a first look right now. The following test is nothing more than a test class (yes, a class itself!) verifying that the Sale class returns the right price after a given state is set:

class SaleTest extends PHPUnit_Framework_TestCase
{
  public function testGetPrice()
  {
    // Put the object into a well-defined state...
    $sale = new Sale();
    $sale->amount = 10;

    // ...and check that the computed price matches the expected one.
    $this->assertEquals(100, $sale->getPrice());
  }
}
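
The Sale class itself is not shown here. Just to make the example self-contained, the following is a minimal sketch of an implementation that would satisfy the test, assuming a hypothetical fixed unit price of 10:

class Sale
{
  public $amount = 0;

  public function getPrice()
  {
    // Hypothetical pricing rule: every unit costs 10.
    return $this->amount * 10;
  }
}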

With a complete suite of tests like this, a development team can prevent chaos from creeping into its design at a very low level, testing each single object's behavior and, as is too often overlooked, the mutual interaction between objects.
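
As mentioned above, collaborators such as databases or web services must be replaced with fakes, and PHPUnit can generate those stand-ins for us. The following sketch is purely illustrative: it assumes a hypothetical TaxService class and a Sale able to receive one, neither of which appears elsewhere in this chapter.

class SaleTaxTest extends PHPUnit_Framework_TestCase
{
  public function testGetPriceAddsTheRateGivenByTheTaxService()
  {
    // TaxService is a hypothetical collaborator: the stub returns a
    // canned rate, and the test verifies Sale asks for it exactly once,
    // so no real service is ever contacted.
    $taxService = $this->getMock('TaxService');
    $taxService->expects($this->once())
               ->method('getRate')
               ->will($this->returnValue(0.2));

    $sale = new Sale();
    $sale->amount = 10;
    $sale->setTaxService($taxService);

    // 10 units at a price of 10 make 100; a 20% tax rate makes 120.
    $this->assertEquals(120, $sale->getPriceWithTax());
  }
}

If a test like this fails, we know the problem lies in Sale's few lines, not in some remote tax service.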

Functional Tests

Functional tests are a powerful way to use our software on the end user's behalf and to verify the correctness of its behavior. They describe the interaction of the user with the machine our software runs on, relentlessly reproducing every click and drag-and-drop. This kind of test exercises the whole system, with no isolation of units coming into play: every testing action involves the user interface, the controller, the model and its logic, and real data, as if a real person were using the software we are testing.

What does a functional test look like? It depends a lot on the testing framework you decide to use. Unlike with unit tests, at the time of writing no shared standard has emerged, nor does any widely adopted technology seem to be paving the way towards a common convention. To explain functional tests in this book we chose Selenium RC, a tool from the Selenium project that provides an API to drive a browser from many programming languages, PHP included. It is obviously focused on web applications, but we think this is an advantage here, since most PHP applications are built for the web. A typical Selenium RC test in PHP looks like this:

class Example extends PHPUnit_Extensions_SeleniumTestCase
{
  function setUp()
  {
    // Choose the browser to drive and the base URL of the site under test.
    $this->setBrowser("*firefox");
    $this->setBrowserUrl("http://www.google.com/");
  }

  function testMyTestCase()
  {
    // Reproduce the user's actions: open the page, type a query, search.
    $this->open("/");
    $this->type("q", "selenium rc");
    $this->click("btnG");
    $this->waitForPageToLoad("30000");

    // Verify what the user would actually see on the results page.
    $this->assertTrue($this->isTextPresent("Results * for selenium rc"));
  }
}
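
Note that a test like this does not run on its own: Selenium RC tests drive a real browser through a running Selenium server, so that server must be started before the suite is launched. We will come back to the setup details in the next chapter.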

Functional tests like this are obviously several orders of magnitude slower than unit tests, but in the right amount they provide many valuable checks, even reaching corners of our software that are hardly testable without such tools.

You Don't Know What You've Got 'til It's Gone

What value does a complete unit and functional test suite bring, covering the most hidden spots of our application and the most complex synergies between classes? While we could be tempted to see those tests as just a tool to reduce the bug ratio, there is a lot more to them. By analyzing the way a web software project typically makes people closely relate to each other, and the way bugs affect those relationships, we will be able to see much more value in using automated tests. The following sections will show you how automated tests bring in value that goes beyond strictly technical issues, addressing many concerns often overlooked by usual project management and strongly empowering the team to be successful.

Trust Me: Communication

One of the ways automated tests address chaos is by providing better communication among team members and between the team and the customer. Software is hard to keep tidy. It is such a chaotic beast that we cannot even be sure we understand the customer's requirements until the day we deliver, unless we use some unambiguous way to agree on what has to be done. In the manufacturing industry, each product is described by a well-defined list of tests that distinguish acceptable products from bad ones. Automated software tests represent the same type of constraint: unit tests state how the system is supposed to do its job, while functional tests state what the system is expected to do.

Functional tests are a perfect tool to supplement or even replace traditional customer requirements documents. Every detail emerging from conversations among the customer, the interaction designers, and the development team should be frozen into some functional test, not to be forgotten, overlooked, misunderstood, or left untested.
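
As a sketch of what freezing a requirement might look like, suppose the customer states that a registered user who logs in must be greeted by name. The URL, the field locators, and the message below are all hypothetical, in the style of the Selenium RC test shown earlier:

class LoginTest extends PHPUnit_Extensions_SeleniumTestCase
{
  function setUp()
  {
    $this->setBrowser("*firefox");
    // Hypothetical address of the application under test.
    $this->setBrowserUrl("http://www.example.com/");
  }

  // Freezes the requirement: "a registered user who logs in
  // must be greeted by name on the home page".
  function testRegisteredUserIsWelcomedAfterLogin()
  {
    $this->open("/login");
    $this->type("username", "alice");
    $this->type("password", "secret");
    $this->click("submit");
    $this->waitForPageToLoad("30000");
    $this->assertTrue($this->isTextPresent("Welcome, alice"));
  }
}

As long as this test passes, the requirement is met; if it ever stops passing, the regression is reported before the customer sees it.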

Unit tests then define how those detailed requirements become working software. No requirement is considered fulfilled until every design detail of its implementation is covered by a proper set of unit tests. This way unit tests become a place for developers to agree upon the structure of the software they are all committed to, improving the spread of implicit knowledge about the project.

If you are new to automated tests and used to traditional documentation processes, you may think that the good old ways of writing documentation have always worked, and that learning a brand-new way would be just a waste of time. We won't argue about the value of well-written documentation, though we could argue about the effort required to keep it up to date. What we strongly oppose is the idea that there is some real alternative to writing tests for finding out whether your software does what it is supposed to do. Writing those tests before implementing the working code is a way to get even more value from your test suite, enabling its use as documentation.

Listen to What You Say, and Write It Down

Every time a programmer implements a feature requested by the customer, he looks for a quick way to check that it's done, so he can start working on the next task. This is how code is developed by everyone, everywhere, every time; it couldn't be otherwise. Whenever we work towards a planned goal we have to find a way to know when the goal is reached, no matter what issue we are coping with. That said, once we have found a way to test the effectiveness of our work, why not capture that test and reuse it at will?

The best thing about automated tests is that they are cheap, they are coded by developers (the only ones who know which details deserve to be tested first, and how), and they can be run at will. Teams using automated tests therefore run them as often as possible, since they provide complete and objective feedback about the quality of their code. No one we have introduced to automated tests has ever gone back, not once in years, because we all need something that reports our progress instantly.

Feedback is also an issue in the customer-developer relationship: the customer has the right to know that everything he asked for is going to be delivered, and he also has the right to information that is as close to real time as possible. The money the customer invests in development deserves our respect, and letting him know how well the investment is going is a minimum requirement. Functional tests are the tool for giving the customer almost real-time feedback about the project's progress. As long as every requirement is expressed in a functional test, the more functional tests pass, the closer we are to the end of the project.

The most important strategic commitment a team can make about testing is to keep the tests quick. Non-automated tests, or tests that take too long, will be run less frequently. This naturally tempts the team to test more new functionality at once, to reduce the testing overhead. That strategy makes failures show up on larger changesets, making it harder to spot the problem and raising the bug rate again.
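
One concrete way to honor that commitment with PHPUnit is to tag the slower tests, typically the browser-driven ones, with a group annotation, so that the quick suite can run on every small change and the full suite less often. The class and URL below are hypothetical; the @group annotation and the --exclude-group switch are standard PHPUnit features:

class CheckoutTest extends PHPUnit_Extensions_SeleniumTestCase
{
  function setUp()
  {
    $this->setBrowser("*firefox");
    // Hypothetical address of the application under test.
    $this->setBrowserUrl("http://www.example.com/");
  }

  /**
   * Browser-driven and therefore slow: tagged so it can be left out
   * of the rapid-feedback runs launched after every small change.
   *
   * @group slow
   */
  function testCheckoutPageLoads()
  {
    $this->open("/checkout");
    $this->assertTrue($this->isTextPresent("Checkout"));
  }
}

Running phpunit --exclude-group slow then gives the fast feedback loop, while the whole suite, slow tests included, can still run before every release or on a continuous integration machine.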

Pleasant Constraints

Refactoring is a technique that involves changing large, critical portions of already-working code. This can be very dangerous because of unintended consequences and side effects. Thousands of details spread across hundreds of thousands of lines of code must all be correct, with hundreds of components communicating with each other in hard-to-predict ways. As the codebase grows, we are exposed to an exponentially larger risk.

Even if you are not using refactoring, you should be aware that in the software industry roughly half of all development usually occurs after the initial release. That means the software needs to be changed safely anyway, and those changes must be cheap. Many common techniques make changes safer: encapsulation, layering, external components, web services, and open sourcing. But the most effective way to ease changes while preventing disasters is an automated test suite covering both the code the developers intend to change and the system behavior the customer is going to get in return.

An up-to-date test suite will not only catch emerging bugs, keeping the system close to its quality requirements, but will also greatly narrow the scope to be searched when fixing a bug. The developer's duty then becomes to write those tests, freezing requirements in a formal language, and to make sure they are correct, complete, version-controlled, maintained, and considered part of the released product. Even if they end up constituting a large portion of the overall codebase, they will repay many times over the effort required to create and maintain them.

As functional tests add up over time, they become an ever-growing repository of requirements that will never be forgotten, not even once through the whole system lifespan. This is great news for development and testing teams that have to run through the whole system every time a change is made, to be sure no regression has been introduced by the change itself. It is great news for the customer, too, because quality assurance teams can be very skilled and proficient, but they are human: sooner or later they will be wrong. Automated tests don't get distracted, bored, annoyed, angry, or tired. They just perform their duty.

Create Trust

Communication, feedback, and safety nets together deliver a fourth crucial value in the life of a team: trust. Communication is the root of trust. Proven working software builds the customer's trust in the team. An unambiguous design specification increases the team members' trust in each other. A widely applied test suite can communicate many times a day how well the team is performing, improving the manager's trust in the team. A software project in which trust is not increasing day by day, minute by minute, is a dead project: defensive behaviors will start creeping in and people will begin to cheat. There is no way to squeeze value out of a cheating team, since defensive behavior does not create business value, just as debugging doesn't.

A winning project is one devoted to building value, not one that rewards the raising of barriers. Any tool, technique, or mindset capable of raising trust within the whole team to the highest level is always worth its cost. Automated tests are one of those tools.

Test-Driven Development

At least a brief acknowledgment should be given to a discipline based on a test-first programming approach, unit tests, and refactoring: test-driven development (TDD).

This set of techniques is a way to discover the best design for our systems, combining top-down and bottom-up thinking in an amazingly effective way. On one hand, it empowers developers to think about the main tasks and responsibilities without worrying about the interactions going on at lower levels; on the other hand, it requires us to develop the simplest actors and interactions that solve our problem. The main mantra of test-driven development is Red, Green, Refactor. What does it all mean?

  • Red: We write a failing test expressing in a well-defined way what we want our unit of code to do.
  • Green: We write the minimal amount of code needed to make the test pass.
  • Refactor: We refactor the code we just wrote to improve its design without breaking the tests. If we need to create new tests and new units of code to accommodate the newly improved design, we apply TDD to those components, too. A minimal sketch of one such cycle follows.
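
Here is that sketch, reusing the hypothetical Sale class from earlier in the chapter; the discount rule, the threshold, and all the numbers are invented for illustration. Red: we express a behavior Sale does not have yet, and watch the test fail.

// Red: this test fails, because Sale knows nothing yet about
// the (invented) requirement that big sales get a discount.
class SaleDiscountTest extends PHPUnit_Framework_TestCase
{
  public function testBigSalesGetATenPercentDiscount()
  {
    $sale = new Sale();
    $sale->amount = 100;
    $this->assertEquals(900, $sale->getPrice());
  }
}

Green: we make the smallest change to Sale that passes this test without breaking the previous one.

class Sale
{
  public $amount = 0;

  public function getPrice()
  {
    // Simplest thing that could possibly work: hard-coded numbers.
    $price = $this->amount * 10;
    if ($this->amount >= 100) {
      $price = $price * 0.9;  // 10% discount on big sales
    }
    return $price;
  }
}

Refactor: with all tests green, we clean up the design, for instance by extracting the magic numbers into well-named constants, rerunning the whole suite after every step to be sure nothing broke.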

While we consider this theme too crucial to be reductively explained in a single chapter of this book, it is also true that very few written resources exist on the subject. This is mostly due to the inherently practical, non-theoretical nature of the discipline, which makes it very hard to define and explain by means of a book. No wonder Kent Beck's main book on TDD [TDD] is based on a series of examples that make TDD clear to the reader.

TDD, by the way, represents the state-of-the-art use of automated tests as a tool not only for checking and anticipating the correctness of code, but also for guiding its design.

Summary

In this chapter, we discovered the technical and strategic value of automated tests, learned about several kinds of automated tests, and scratched the surface of test-driven development. We also learned there is a way to inject value into our software, making it grow constantly by reducing its wandering across chaotic states. The next chapter will unveil the tools we can rely upon while taming the beast of software complexity.
