Something is better than nothing

Don't get caught up in the purity of total isolation or worry about obscure test methods. First thing, start testing.

How to do it...

You have just been handed an application developed by people who are no longer with your company. Been there before? We all have. Probably on several occasions. Can we predict some of the common symptoms?

  • Few (if any) automated tests
  • Little documentation
  • Chunks of code that are commented out
  • Either no comments in the code, or comments that were written ages ago and are no longer correct

And here is the fun part: we don't know all of these issues up front. We are basically told where to check out the source tree, and to get cracking. For example, it's only when we run into an issue and seek out documentation that we discover what does (or does not) exist.

Maybe I didn't catch everything you have encountered in that list, but I bet I hit a fair number. I don't want to sound like an embittered software developer, because I'm not. Not every project is like this. But I'm sure we have all had to deal with this at one time or another. So what do we do? We start testing.

But the devil is in the details. Do we write a unit test? What about a thread test or an integration test? You know what? It doesn't matter what type of test we write. In fact, it doesn't matter if we use the right name.

When it's just you and the code sitting in a cubicle, terminology doesn't matter. Writing a test is what matters. If you can pick out one small unit of code and write a test, then go for it! But what if you picked up a jumbled piece of spaghetti code that doesn't come with nicely isolated units?

Consider a system where the smallest unit you can get hold of is a module that parses an electronic file and then stores the parsed results in a database. The parsed results aren't handed back through the API. They just silently, mysteriously end up in the database. How do we automate that?

  1. Write a test that starts by emptying all the tables relevant to the application.
  2. Find one of your users who has one of these files and get a copy of it.
  3. Add code to the test that invokes the top-level API to ingest the file.
  4. Add some more code that pulls data out of the database and checks the results. (You may have to grab that user to make sure it is working correctly.)
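The four steps above can be sketched as a single `unittest` test case. Everything here is a hypothetical stand-in: `parse_and_store` plays the role of the legacy top-level API, and the `parsed_records` table stands in for whatever your application's schema actually looks like. An in-memory sqlite3 database keeps the sketch self-contained; against a real system you would point the same test at a dedicated test schema.

```python
# A sketch of the end-to-end test described above. The ingest API
# (parse_and_store) and the table layout are hypothetical stand-ins
# for whatever the legacy module actually exposes.
import sqlite3
import unittest

def parse_and_store(conn, path):
    """Stand-in for the legacy top-level API: parses an electronic
    file and silently writes the results into the database."""
    with open(path) as f:
        for line in f:
            name, qty = line.strip().split(",")
            conn.execute(
                "INSERT INTO parsed_records (name, qty) VALUES (?, ?)",
                (name, int(qty)))
    conn.commit()

class IngestEndToEndTest(unittest.TestCase):
    def setUp(self):
        # Step 1: start from a clean slate -- empty every relevant table.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS parsed_records "
            "(name TEXT, qty INTEGER)")
        self.conn.execute("DELETE FROM parsed_records")

    def test_file_ends_up_in_database(self):
        # Step 2: a sample file like the one obtained from a real user.
        with open("sample_input.csv", "w") as f:
            f.write("widget,3\ngadget,7\n")
        # Step 3: invoke the top-level API to ingest the file.
        parse_and_store(self.conn, "sample_input.csv")
        # Step 4: pull the data back out of the database and check it.
        rows = self.conn.execute(
            "SELECT name, qty FROM parsed_records ORDER BY name").fetchall()
        self.assertEqual(rows, [("gadget", 7), ("widget", 3)])
```

Run it with `python -m unittest` like any other test. It is not a unit test, and it touches a real file and a real database, but it is automated and repeatable, which is the whole point.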

Congratulations, you just wrote an automated test! It probably didn't qualify as a unit test. In fact, it may look kind of ugly to you. But so what? Maybe it took five minutes to run, but isn't that better than no test at all?

How it works...

Since the database is the place where we can assert results, we need to have a cleaned out version before every run of our test. This is definitely going to require coordination if other developers are using some of the same tables. We may need our own schema allocated to us, so that we can empty tables at will.
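One way to guarantee that clean slate is a small helper that empties every table the test asserts against, run at the start of each test rather than the end, so that leftover rows from an aborted run cannot poison the next one. This is a sketch; the table names are hypothetical, and with a real database driver you would substitute your own connection and schema.

```python
# Empty the tables our tests assert against before each run.
# Table names are hypothetical; substitute your application's own.
import sqlite3

TABLES_WE_TOUCH = ["parsed_records", "import_log"]

def empty_relevant_tables(conn):
    """Delete every row from the tables our tests check.

    Called at the START of each test, not the end, so data left
    behind by an aborted run can't contaminate the next one.
    """
    for table in TABLES_WE_TOUCH:
        conn.execute(f"DELETE FROM {table}")
    conn.commit()
```

If other developers share the same tables, wiring this into every test's `setUp` is exactly why a schema of your own is worth asking for: then the helper can run freely without destroying anyone else's data.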

The modules probably suffer from a lack of cohesion and too much tight coupling. We could try to diagnose exactly why the code is bad, but doing so doesn't advance our cause of building automated tests.

Instead, we must recognize that if we try to jump immediately into unit level testing, we would have to refactor the modules to support us. With little or no safety net, the risk is incredibly high, and we can feel it! If we try to stick to textbook unit testing, then we will probably give up and deem automated testing as an impossibility.

So we have to take the first step and write the expensive, end-to-end automated test to build the first link in a chain. That test may take a long time to run and not be very comprehensive in what we can assert. But it's a start. And that is what's important. Hopefully, after steady progress in writing more tests like this, we will build up enough of a safety net to go back and refactor this code.

That can't be everything!

Does "just write the test" sound a little too simple? Well, the concept is simple. The work is going to be hard. Very hard.

You will be forced to crawl through lots of APIs to find out exactly how they work. And guess what? You probably won't be handed many intermediate results to assert. Understanding the API matters mainly because it lets you track down where the data travels.

When I described the data in our situation as "mysteriously ending up in the database", I meant that the APIs you are handed probably weren't designed with testability, or rich return values, in mind.

Just don't let anyone tell you that you are wasting your time building a long-running test case. An automated test suite that takes an hour to run and is exercised at least once a day probably instills more confidence than clicking through the screens manually. Something is better than nothing.

See also

  • Cash in on your confidence