Chapter 9

Testing—Not Just for Testers

There are many different kinds of testing and nearly all of them are annoying to the typical programmer. Programmers are driven to create things, build products, add features, and make things work. Testing is just the opposite. It's about finding weaknesses, exploiting edge cases, burrowing into a system, and making it break. It is a totally different mindset, which, to many people, is a good reason why programmers should not be allowed to test their own work.

“I don't understand—it worked on my machine”

—Almost every programmer at some point

But there is one kind of testing that is universally accepted as being suitable for programmers, and that is unit testing. Everyone agrees it would be a good idea if programmers did more of it, and some people think that programmers should write unit tests even before they write any functional code. Writing tests first is called test-driven development (TDD) and although it hasn't quite achieved the same degree of popular acceptance as plain old unit testing (POUT), TDD evangelists claim that it makes programmers more productive and improves code quality.

Some aspects of unit testing remain difficult—most notably, factoring out external dependencies (such as databases and networks) from unit tests without introducing an undesirable level of complexity. Adding unit tests to legacy code can be hindered by excessive coupling (where component A depends on B, C, D, E), which can make isolating a given function for proper unit testing difficult.

Tools and techniques are available that can tackle these difficulties. One of the most common techniques is to factor out concrete dependencies (that is, classes) into abstract dependencies (that is, interfaces), which makes substituting fake (or “mock”) classes for the purpose of testing easy. Tools are available that allow a programmer to dynamically (that is, at runtime) break these dependencies and substitute fake classes and functionality via mock objects. Some of these tools are free, and some are very much not free.

Unit Tests

A unit test is a method for testing a unit of code to determine its fitness for use. This immediately raises the question of what constitutes a “unit,” and thereby hangs a pointless debate. Some programmers believe a unit equates to a class, and others think a unit equates to a method or function. Still others think a unit is simply the smallest isolatable component of a program. Perhaps the most sophisticated opinion is that a unit is a single path of execution through a function or method (perhaps one of many such paths, depending on how much logic the function contains).

Pragmatic programmers define a unit as being “one thing” and then excuse themselves from the debate to get back to writing clean code and shipping software.

Whichever definition is used, all sane programmers agree that a unit test is not something that is done by hand. A unit test is an automated test that exercises an isolated component and confirms that the outcome is according to expectations.

Test-Driven Development

Many programmers are strongly in favor of writing tests before they write the actual code to be tested. These programmers reason that writing tests first has a number of benefits:

  • It forces a programmer to answer the question, “What should this code do?” at a very detailed level. In other words, writing the tests first requires the programmer to think about concrete requirements before coding. This can reduce the time spent reworking code during later stages of development.
  • It encourages the programmer to write the minimum amount of code necessary to pass the test, and then to stop. This keeps code-bloat to a minimum.
  • It encourages the programmer to write code that is easy to test. Code that is easy to test is usually also modular and has minimal dependencies.

TDD is an increasingly popular school of thought, and experience with TDD is prized by many interviewers.

Behavior-driven development

Behavior-driven development (BDD) is an evolution of TDD that produces the same outcome (coded unit tests) but which places more emphasis on specifying the behavior of “units” and introduces a domain-specific vocabulary to aid team communications and testing documentation. BDD activities are usually supported by a tool such as NBehave.

Red, green, refactor

Just like any decent school of thought, TDD has its own mantra. The TDD mantra is “Red, green, refactor.” This mantra describes an ideal process of writing code and tests.

  • Red: First, one or more tests are written that will be used to exercise the not-yet-written code. This test will fail and cause the testing framework to show a red test result.
  • Green: The next stage is to write the minimum amount of code required to make the newly written test pass. The idea is that you give little consideration to writing elegant or extensible code at this stage, focusing on nothing more than making the tests pass. Passing tests are usually displayed in green in the UI of the testing framework.
  • Refactor: The final stage is to take the code that passes the new test and to rewrite it so that it is maintainable and meets other important quality criteria. One of the key benefits of first having a test in place is that it increases the programmers' confidence that they have not inadvertently broken the correct behavior of their newly written code while refactoring it.
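As a concrete illustration, here is a minimal sketch of one pass through the cycle using the Visual Studio framework. The Calculator class and its Add method are invented for this example; in the red stage the test is written before Add exists (or while it is only a failing stub).

[TestClass]
public class CalculatorTests
{
    // Red: written first; it fails until Add is implemented correctly.
    [TestMethod]
    public void AddReturnsSumOfTwoNumbers()
    {
        Assert.AreEqual(4, Calculator.Add(2, 2));
    }
}

public static class Calculator
{
    // Green: the simplest code that makes the test pass.
    // Refactor: with the test in place, the implementation can later be
    // renamed, simplified, or extended without fear of silently breaking it.
    public static int Add(int a, int b)
    {
        return a + b;
    }
}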

Writing Good Unit Tests

The idea of unit testing has been around for quite some time and although areas of debate persist, a few widely accepted good practices have risen to the surface.

Run quickly

Foremost among the good practices is that a unit test should run quickly. A single test that takes a whole second to run might seem harmless when it is first created, but when joined by 10,000 other tests each taking a second, the test suite suddenly becomes impractical to run as often as needed (10,000 seconds is nearly three hours). Anything that you do in a test suite that slows down test execution is probably not the right thing to do; a database round-trip, for instance, is usually not part of a good unit test. It simply takes too long.

Be simple

A unit test should be simple. Unit tests should be easy to write and simple to read. Complex unit tests can easily end up bogging down development. You must avoid creating a burdensome suite of tests where programmers in your team avoid updating the tests simply because they are hard to understand. (Obviously the same applies to all code, not just tests, but the point is well applied here.)

A unit test should ideally test just one thing. This makes tracing the cause of a test failure easy to do. If a unit test covers more than one thing then finding out which thing failed the test takes more effort than it ought to. This is not a hard and fast rule; sometimes it can be expedient to test several things in a single unit test in order to minimize the repetition of setup/teardown routines.

Be self-evident

A unit test should be self-evident. In other words the point of a test should be plain as day. If you have to wrinkle your nose and squint before you can see what a unit test does then the test is probably too complex. All the usual rules of good coding practice apply to unit tests (with the possible exception of Don't Repeat Yourself—unit tests do tend to look similar). To help make a unit test self-evident choose good method and variable names, use whitespace to assist readers, and use code comments when some aspect of your test is unavoidably opaque.
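For example, a test laid out in the common arrange/act/assert style with descriptive names needs almost no explanation. The InvoiceCalculator class in this sketch is invented purely for illustration:

[TestMethod]
public void TotalIsZeroWhenInvoiceHasNoLines()
{
    // Arrange
    var calculator = new InvoiceCalculator();

    // Act
    decimal total = calculator.CalculateTotal();

    // Assert
    Assert.AreEqual(0m, total);
}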

Be helpful when failing

A unit test should be helpful when it fails: it should indicate exactly what went wrong, and when a test fails you should be pleased. You should be happy because it means the test has caught a problem that you would otherwise have discovered later, say during an important customer demo or when your rocket reaches the stratosphere. A good unit test not only serves to catch your mistakes, but also pinpoints the source of a problem when it finds one. The output of a unit test should resemble the finest bug report you've ever seen. Good bug reports make me happy.
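In practice this means favoring assertions that report what was expected and what was actually found. The assertion methods in most frameworks accept an optional message for exactly this purpose, as in this sketch (OrderParser.ParseQuantity is a hypothetical method):

[TestMethod]
public void ParseQuantityIgnoresSurroundingWhitespace()
{
    int actual = OrderParser.ParseQuantity("  42 ");

    // On failure the framework reports the expected value (42), the actual
    // value, and this message, which together pinpoint the problem.
    Assert.AreEqual(42, actual, "Whitespace around the quantity should be ignored");
}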

Be self-contained

A unit test should be self-contained. If a test has dependencies that are out of its control then it can fail at any time and you won't know why without spending time investigating. Suppose the test relies on an environment variable having a certain value. If another test is added to the test suite and it relies on that variable having a different value then one of these tests is bound to fail. This can devolve into a race-condition (or whack-a-mole if you prefer) where one test sets a value only to have it immediately set to a different value by another test running at the same time.


Note

If your code is riddled with globally shared state variables then you have other more significant things to worry about than the stability of your unit tests.

Testing Slow Things

The ideal that a unit test must run quickly generates a surprising amount of controversy. The ideal of “must run quickly” seems to be interpreted by some programmers as if it means you shouldn't test slow things. These programmers think that if you have a routine that reaches out to the database, that communicates over a network, or that thrashes the hard disk for a while, then because the test might be slow you shouldn't test it.

That idea is plainly wrong. The only time you can ignore testing is if the correctness of your application doesn't matter.

In short, any worthwhile tests that do not run quickly should be segregated, kept apart from the suite of unit tests. They could be run in the background at periodic intervals, maybe once per day if they are really slow. They might be split into several sets of tests and run in parallel on different machines. They might serve as a useful suite of regression tests just prior to staging an application for user acceptance testing.

There are all kinds of things you can do with these slow tests, but you mustn't be tempted to mix them up with unit tests or toss them out just because they are slow.
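If you use the Visual Studio framework, one simple way to keep slow tests out of the fast run is to label them with a category and then include or exclude that category when running tests (MSTest.exe, for example, has a /category switch). A minimal sketch, assuming a hypothetical CustomerRepository class that performs a real database round-trip:

[TestClass]
public class CustomerRepositoryTests
{
    // Categorized as an integration test so it can be excluded from
    // the frequently run suite of fast unit tests.
    [TestMethod]
    [TestCategory("Integration")]
    public void CanLoadCustomerFromDatabase()
    {
        var repository = new CustomerRepository();

        var customer = repository.Load(42);

        Assert.IsNotNull(customer);
    }
}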

Unit Testing Frameworks

It is perfectly possible, if a little masochistic, to write unit tests without using a unit testing framework. However, the advantages of a unit testing framework are many, and the disadvantages…actually, I can't think of any. If you write unit tests then you should use a framework.

One of the best-known and most widely used families of unit testing frameworks is xUnit. Most popular programming languages have a variant of xUnit:

  • NUnit for .NET
  • CppUnit for C++
  • JUnit for Java
  • Test::Class for Perl
  • Test::Unit for Ruby
  • PHPUnit for PHP
  • unittest (previously known as PyUnit) for Python
  • DUnit for Delphi

If you write code using Visual Studio then you are probably aware of Microsoft's Visual Studio unit testing framework, known affectionately as “The Microsoft Visual Studio Unit Testing Framework.” Support for this testing framework is built into the Visual Studio IDE and a command-line utility (MSTest.exe) can be used to run these tests. Many programmers understandably use “MS Test” as a kind of shorthand for the Visual Studio framework though strictly speaking they are different things. The Microsoft framework is very similar to NUnit, the testing framework that predates it. Both of these frameworks use code attributes to indicate test classes (that is, the classes containing unit tests) and test methods (that is, the actual unit tests).

Here is an example of a Visual Studio test class containing one unit test. The attributes used to indicate a test class and a test method are [TestClass] and [TestMethod].

[TestClass]
public class TestClass
{
    [TestMethod]
    public void MyTest()
    {
        Assert.IsTrue(true);
    }
}

The equivalent attributes in NUnit are [TestFixture] and [Test]:

[TestFixture]
public class TestClass
{
    [Test]
    public void MyTest()
    {
        Assert.IsTrue(true);
    }
}

Notice that this test contains exactly one line that calls the static IsTrue method of the Assert class. As written this test is useless; it doesn't test anything. A typical test compares an expected result to an actual result, something more like this:

[TestClass]
public class TestClass
{
    [TestMethod]
    public void MyTest()
    {
        bool actualResult = Foo();

        Assert.IsTrue(actualResult);
    }
}

Mock Objects

Sometimes an object is bound up with another object in a way that makes testing it in isolation difficult. You might, for instance, have a method that performs a calculation using data retrieved from a database. For a unit test you ideally want to avoid retrieving actual data from a database because

  • You want tight control over the data that is used for this test; for instance, you want to ensure that your test covers rare edge cases that might not exist in the database when the test is run. Alternatively, you want to avoid the overhead of ensuring that certain data exists in the database before running the unit test.
  • You want to avoid the overhead of establishing a database connection and retrieving data from the database so that (in keeping with the attributes of a good unit test) the test will run quickly.

Consider the following method:

     
public decimal CalcFoo()
{
    var df = new DataFetcher();
    var data = df.GetData();

    var result = data.Take(100).Average();

    return result;
}

The CalcFoo method relies on the GetData method of the concrete DataFetcher class. If you replace the reference to DataFetcher with a reference to an IDataFetcher interface then you can easily substitute a fake (mock) class when you need to; that is, when you write a unit test. Here is the revised code that references an interface:

public decimal CalcFoo(IDataFetcher df)
{
    var data = df.GetData();

    var result = data.Take(100).Average();

    return result;
} 

For the sake of completeness, here is the code for the IDataFetcher interface and the DataFetcher class:

public interface IDataFetcher
{
    void Combobulate();
    List<decimal> GetData();
    bool IsFancy { get; set; }
}

public class DataFetcher : IDataFetcher
{

    public List<decimal> GetData()
    {
        var result = new List<decimal>();

        #region Data-intensive code here

        // ...

        #endregion

        return result;
    }

    public bool IsFancy { get; set; }

    public void Combobulate()
    {
        #region data intensive combobulation
        // ...
        #endregion
    } 
}

A mock object would typically implement this interface with a fixed return value for the GetData() method, as shown here:

public class FakeDataFetcher : IDataFetcher
{

    // Fake method, returns a fixed list of decimals
    public List<decimal> GetData()
    {
        return new List<decimal> {1,2,3};
    }

    // Don't need this property for my unit test
    public bool IsFancy { get; set; }

    public void Combobulate()
    {
        // Don't need this method for my unit test
        throw new NotImplementedException();
    }
}

Now you can write a unit test using this FakeDataFetcher with complete control over the data that will be fed to the CalcFoo method of the FancyCalc class.

[TestMethod]
public void FancyCalcTest()
{
    var fakeDataFetcher = new FakeDataFetcher();

    var fc = new FancyCalc();
    var result = fc.CalcFoo(fakeDataFetcher);

    Assert.IsTrue(result == 2m);
}

In summary, you have isolated the CalcFoo method, breaking its dependency on DataFetcher, and now you can test it with specific data in each unit test you write. Note that this approach has one significant drawback: you would need to create a slightly different version of FakeDataFetcher for every variation of data you want to test. This drawback is one of the reasons why mocking frameworks were invented.

Mocking frameworks such as Moq remove the need to implement the entire interface for the purpose of overriding just one method (or just a few methods). If you were using Moq (to pick a mocking framework at random) then you would be able to specify an implementation for GetData without the need to also specify an implementation for IsFancy and Combobulate. This is clearly more of a help when interfaces are more extensive than the one shown in this simple example.
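For instance, the hand-written FakeDataFetcher above could be replaced by a mock configured in a couple of lines. This sketch assumes the Moq package is referenced (using Moq); only GetData is set up, and the other interface members are left alone:

[TestMethod]
public void CalcFooAveragesTheFetchedData()
{
    // Create a mock IDataFetcher and specify only the member this test needs.
    var mockFetcher = new Mock<IDataFetcher>();
    mockFetcher.Setup(m => m.GetData()).Returns(new List<decimal> { 1, 2, 3 });

    var fc = new FancyCalc();

    // Pass the mock's generated implementation to the method under test.
    var result = fc.CalcFoo(mockFetcher.Object);

    Assert.AreEqual(2m, result);
}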

Incidentally, this technique of referencing interfaces instead of concrete classes is a form of dependency injection, where dependencies like DataFetcher can be easily switched out for alternative implementations.
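A common variation is constructor injection, where the dependency is supplied when the object is created rather than passed to each method. As a sketch only (the chapter never shows the full FancyCalc class, so this version is an assumption about how it might look):

public class FancyCalc
{
    private readonly IDataFetcher _dataFetcher;

    // The dependency is supplied from outside; tests pass a fake or a mock,
    // and production code passes the real DataFetcher.
    public FancyCalc(IDataFetcher dataFetcher)
    {
        _dataFetcher = dataFetcher;
    }

    public decimal CalcFoo()
    {
        var data = _dataFetcher.GetData();

        return data.Take(100).Average();
    }
}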

QUESTIONS

Now that you have some concepts on testing down, it's time to try your hand at answering some questions.

1. What should you test?
Consider the following code; do you think this qualifies as a unit test?
 private static void TestRandomIntBetween()
 {
     int expectedResult = 99;

     int actualResult = RandomIntBetween(98, 100);

     if (expectedResult == actualResult)
         Console.WriteLine("Test succeeded");
     else
         Console.WriteLine("Test failed");
 }
2. What exactly should you test for?
Consider the following code; do you think this is an appropriate unit test?
 private static void TestRandom()
 {
     int unexpectedResult = 42;
     Random rand = new Random();

     int actualResult = rand.Next(1, 1000000);

     if (unexpectedResult != actualResult)
         Console.WriteLine("Test succeeded");
     else
         Console.WriteLine("Test failed");
 }
3. TDD part 1
Write a unit test that calls a method named IsLeapYear, passing it an arbitrary date in the year 2013. The IsLeapYear method has a signature as follows:
 public static bool IsLeapYear(DateTime date)
The year 2013 is not a leap year so your test should assert that this method returns false. You do not need to write the IsLeapYear method.
4. TDD part 2
Assume a fictitious scenario where someone else in your team has copied the following code from the Web:
 public static bool IsLeapYear(DateTime date)
 {
     return date.Year % 4 == 0;
 }
Also assume that you spend another three or four seconds searching the Web and manage to find a reliable source that describes a leap year as “years that are divisible by 400 or years that are divisible by 4 but not by 100.”
Now:
1. Write a test for the earlier IsLeapYear method that shows the method fails for the year 1900 (not a leap year).
2. Rewrite the IsLeapYear method according to the previously described algorithm.
3. Rerun your tests to show that your implementation of the previously described algorithm is correct.
5. Unit and integration testing
What is the difference between unit and integration testing?
6. Additional benefits of unit testing
Apart from the obvious benefit of testing, what are some other reasons for writing unit tests?
7. Why use mock objects?
Describe some of the key reasons why a programmer might want to use mock objects when writing unit tests.
8. Limits of unit testing
Unit testing has many benefits; what are some of its limitations?
9. Thinking up good test values
Suppose you are given a function that returns the most common character contained in a string. For example, if the string is “aaabbc” then the function should return “a.” If the string is empty or has more than one “most common” character, then the function should return null.
What test values should you use in unit testing to ensure that this function works correctly?
10. For code coverage, what percent should you aim for?
When a unit test runs a line of code it is counted toward an overall figure of code coverage. If 100 lines of code are in an application, and unit testing runs 75 of these lines then the code coverage is 75 percent.
What is the ideal number for code coverage as a percentage of all code in an application?
11. What tests the unit tests?
If programmers are so concerned about writing tests to ensure the correctness of their code, why don't they write unit tests that test other unit tests?

ANSWERS

1. What should you test?
Consider the following code; do you think this qualifies as a unit test?
 private static void TestRandomIntBetween()
 {
     int expectedResult = 99;

     int actualResult = RandomIntBetween(98, 100);

     if (expectedResult == actualResult)
         Console.WriteLine("Test succeeded");
     else
         Console.WriteLine("Test failed");
 }
This is not a trick question. This is a simple example of a unit test that confirms an expected result (99) is obtained when calling the RandomIntBetween method. It doesn't use a unit-testing framework, but that has no bearing on whether it qualifies as a valid unit test. This is a valid unit test, albeit a primitive one.
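For comparison, the same check written with the Visual Studio framework would look something like this (assuming RandomIntBetween is accessible to the test class):

 [TestMethod]
 public void RandomIntBetween98And100Returns99()
 {
     int expectedResult = 99;

     int actualResult = RandomIntBetween(98, 100);

     Assert.AreEqual(expectedResult, actualResult);
 }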
2. What exactly should you test for?
Consider the following code; do you think this is an appropriate unit test?
 private static void TestRandom()
 {
     int unexpectedResult = 42;
     Random rand = new Random();

     int actualResult = rand.Next(1, 1000000);

     if (unexpectedResult != actualResult)
         Console.WriteLine("Test succeeded");
     else
         Console.WriteLine("Test failed");
 }
This test obtains a random number between 1 and 1,000,000 and checks that the number is not equal to 42. It will “succeed” in 999,999 cases and “fail” in one case. You might reasonably guess that the programmer was thinking that a random number between 1 and 1 million would never return the number 42 (what are the chances?!) and consequently wrote a test of the Random class that relies on this faulty assumption.
As a rule (and it is a firm rule, not just guidance) tests should not rely on chance outcomes in order to succeed or fail.
Another problem is shown here: It is inappropriate for a unit test to test the underlying framework. In this question the test is creating an instance of a .NET System.Random object and then confirming that it does, indeed, return a random number that is not equal to 42.
The application programmer does not benefit except for the dubious theoretical potential for detecting an error in the .NET Framework. This theoretical benefit is far outweighed by the cost of writing, debugging, and maintaining this test into the future. (Of course, if you do happen to find a bug in the .NET Framework, then I take it all back. You rock.)
Tests like this are written by programmers who are new to unit testing and who aren't sure what they ought to be testing. In a nutshell your tests should test the code you have written, not the code in the underlying framework.
3. TDD part 1
Write a unit test that calls a method named IsLeapYear, passing it an arbitrary date in the year 2013. The IsLeapYear method has a signature as follows:
 public static bool IsLeapYear(DateTime date)
The year 2013 is not a leap year so your test should assert that this method returns false. You do not need to write the IsLeapYear method.
The code is as follows:
 using System;
 using Microsoft.VisualStudio.TestTools.UnitTesting;

 namespace UnitTests
 {
     [TestClass]
     public class TestClass
     {
         [TestMethod]
         public void IsLeapYear2013()
         {
            Assert.IsFalse(IsLeapYear(new DateTime(2013, 1, 1)));
         }

         public static bool IsLeapYear(DateTime date)
         {
             // code not yet written...

             return false;
         }

     }
 }
4. TDD part 2
Assume a fictitious scenario where someone else in your team has copied the following code from the Web:
 public static bool IsLeapYear(DateTime date)
 {
     return date.Year % 4 == 0;
 }
Also assume that you spend another three or four seconds searching the Web and manage to find a reliable source that describes a leap year as “years that are divisible by 400 or years that are divisible by 4 but not by 100.”
Now:
Write a test for the IsLeapYear code that shows it fails for the year 1900 (not a leap year).
Rewrite the IsLeapYear method according to the previously described algorithm.
Rerun your tests to show that your implementation of the previously described algorithm is correct.
This question has three parts. Your answer to the first part should look something like this:
        [TestMethod]
        public void IsNotLeapYear1900()
        {
            Assert.IsFalse(IsLeapYear(new DateTime(1900, 1, 1)));
        }
Converting the given algorithm to code is simple. You must account for two clauses. The first is a test for “years that are divisible by 400”:
 (date.Year % 400 == 0)
The second clause covers “years that are divisible by 4, but not by 100”:
 (date.Year % 4 == 0 && date.Year % 100 != 0)
So now you can rewrite your IsLeapYear method as:
       
    public static bool IsLeapYear(DateTime date)
    {
        return (date.Year % 400 == 0) 
            || (date.Year % 4 == 0 && date.Year % 100 != 0);
    }
Finally, you can rerun the test and show that it now passes for the year 1900 (not a leap year). Figure 9.1 shows the output from running this test using MSTest.exe.

Figure 9.1 Test run with MSTest.exe

5. Unit and integration testing
What is the difference between unit and integration testing?
Sometimes the difference between unit and integration testing is subtle, and sometimes it isn't a useful distinction to make. However, some widely accepted general rules exist about what makes a good unit test, and if a unit test breaks these rules it is probably not a unit test in the strictest sense.
A unit test should
  • Test a single unit
  • Have few or no external dependencies except on the code being tested
  • Not query a database or external resource
  • Not touch the file system
  • Not rely on environment or configuration variables
  • Not rely on being run in a certain order
  • Not rely on the outcome of any other test
  • Run quickly
  • Be consistent, producing the same result each time it is run
A test that does not follow these rules is probably an integration test. These can be very valuable tests but they should not be mixed in with unit tests.
6. Additional benefits of unit testing
Apart from the obvious benefit of testing, what are some other reasons for writing unit tests?
Some other reasons include the following:
  • Unit testing helps to find errors earlier in the software development life-cycle (SDLC) where they are cheaper to fix.
  • When a good suite of unit tests is in place the programmer who subsequently makes a code modification has reassurance (in the form of unit tests that pass) that his or her change has not broken the system in some way.
  • Unit testing encourages the programmer to consider a systematic enumeration of edge cases. The resulting tests are re-runnable and can serve as a form of regression testing.
  • Unit tests provide concrete and runnable examples of how to use the application code, serving as a form of “live” technical documentation.
7. Why use mock objects?
Describe some of the key reasons why a programmer might want to use mock objects when writing unit tests.
Some key reasons include the following:
  • For isolation, meaning the test can focus on just one unit of the application. This is especially valuable when testing legacy code, or code that was written in a way that makes it difficult to test due to tight coupling of components
  • To ensure the test runs quickly, by eliminating dependencies on hardware, databases, networks, and other external factors
  • To control inputs to a function by faking an otherwise non-deterministic component
  • To enable testing before all dependencies are implemented
8. Limits of unit testing
Unit testing has many benefits; what are some of its limitations?
Most programmers and most interviewers are enthusiastic about unit testing, but some interviewers will also want to know whether you have had enough experience with unit testing to understand some of its limitations.
Here are some situations in which the benefits of unit testing might be outweighed by the cost:
  • Writing unit tests for trivial code (for example, automatic properties in .NET). This is rarely worth the effort.
  • Testing code that contains no logic, for example a thin shell that simply passes through to another API.
  • If a test is significantly more difficult to write than the code it is intended to test. In this situation, writing unit tests might not be worth it. This is probably an indication that the code being tested is in need of simplification, in which case once it is simplified, unit tests should still be written.
  • When you cannot possibly tell whether the return values from a unit of code are correct.
  • When the code is exploratory. When code is being written purely as a way to explore an idea, and (here's the catch) you can guarantee that the code will be thrown away, then writing unit tests is probably not worth it.
In addition to these scenarios some logical limitations exist:
  • Unit testing by definition will not test the integration of components.
  • Unit testing can prove that code does or does not contain a specific set of errors, but it cannot prove that the code contains no errors at all except in trivial cases.
  • The amount of code that is required for testing may be far greater than the amount of code being tested. The effort required to write and maintain this code might not be worthwhile.
  • Unit testing code might itself contain bugs, and this is especially true when a significant amount of unit test code has been written.
  • If a suite of unit tests is not run frequently then the unit test code can easily get out of sync with the code it is supposed to test. When this happens the effort of bringing the unit test code back into sync might not be worthwhile.
It is also worth looking quickly at a few invalid reasons for not writing unit tests. These are answers that you should avoid giving at an interview, for obvious reasons:
  • “Unit testing takes too long.” This is probably the most common objection to writing unit tests. It ignores the ever-present need to perform testing regardless of how the testing is performed, and it overlooks the longer term benefit of having a suite of automated tests rather than performing tests by hand or not at all.
  • “Unit testing can't test every possible combination of inputs so it isn't worth doing.” This is a logical fallacy, a form of false dilemma, because benefit can still be gained from writing unit tests even if they do not represent every possible combination of inputs. The choice is not a binary one.
  • “Unit testing is a form of code bloat.” This argument has some merit in the sense that a large amount of poorly organized code will be a strain on any team. This is, however, an argument to keep all code, including unit test code, well-organized and in a state that facilitates ongoing maintenance and support. It is not a valid reason to avoid writing unit tests.
9. Thinking up good test values
Suppose you are given a function that returns the most common character contained in a string. For example, if the string is “aaabbc” then the function should return “a.” If the string is empty or if it has more than one “most common” character then the function should return null.
What test values should be used in unit testing to ensure that this function works correctly?
The answer to this question is a list of values that test the function for correct behavior not just in the normal case (that is, the string given example) but also for edge cases. An edge case is an unlikely or unusual input value that the programmer might have overlooked or ignored when writing the function. Some reasonable constraints can be assumed; for instance, you can assume that the supplied string is a valid string and not some other data type. On the other hand you cannot assume that the string is not a null value. Your list of test values should include at least one of each of the following types of string.
  • The string given in the question: “aaabbc”
  • A string with no “most common” character: “abc”
  • An empty string: “”
  • A null string
  • A string containing numeric characters: “1112223”
  • A string containing punctuation symbols: “!!!$$%”
  • A string containing whitespace: “   ”
  • A string containing accented characters: “éééááó”
  • A string containing Unicode characters: “»»»½½¾”
  • A string containing string-quotation characters, such as " and '
  • A string containing non-printable/control characters
  • A very large string
  • A string with a large quantity of each character
  • A string with a large but equal quantity of each character
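Expressed as unit tests, a few of these values might look like the following sketch. The method name MostCommonChar and its nullable char return type are assumptions, because the question does not specify a signature:

 [TestMethod]
 public void ReturnsMostCommonCharacter()
 {
     Assert.AreEqual('a', MostCommonChar("aaabbc"));
 }

 [TestMethod]
 public void ReturnsNullWhenNoSingleMostCommonCharacter()
 {
     Assert.IsNull(MostCommonChar("abc"));
 }

 [TestMethod]
 public void ReturnsNullForEmptyString()
 {
     Assert.IsNull(MostCommonChar(""));
 }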
10. What percent of code coverage should you aim for?
When a unit test runs a line of code it is counted toward an overall figure of code coverage. If there are 100 lines of code in an application and unit testing runs 75 of these lines then the code coverage is 75 percent.
What is the ideal number for code-coverage as a percentage of all code in an application?
This is a bit of a trick question, because no consistent correlation exists between the percentage of code coverage and the quality of unit tests. For example, having 100 percent code coverage without testing anything except that the code runs without crashing is possible, and having a very low code coverage figure while still having very valuable unit tests is also quite possible.
Experience should inform your judgment about the right balance of unit test quantity, code coverage, and the quality of your unit tests. No rote calculation exists that you can perform to arrive at a correct figure for code coverage.
If your interviewer is persistent, it probably means he or she has a figure in mind. A common suggestion is 80 percent, and if you feel compelled to pick a number then you might as well go with that. You should also mention the prospect of diminishing returns beyond a figure of around 80 percent, and also that many other factors will influence the choice of a code-coverage target. The goal of writing unit tests is to produce useful tests, not to achieve an arbitrary code coverage figure.
11. What tests the unit tests?
If programmers are so concerned about writing tests to ensure the correctness of their code, why don't they write unit tests that test other unit tests?
Two main reasons exist why programmers do not normally write tests for tests:
  • Unit tests should be simple, meaning they should contain very little or no logic. This means they contain very little that is worth testing.
  • Because unit tests are, by their nature, a comparison of results obtained from application code against expected results, the unit tests are in effect validated by the application code. If results are not what a test expects, then it could be the application code that is wrong or it could equally well be the test that is wrong. Either way the test failure will need to be investigated and resolved, which means that unit tests and application code are effective tests of each other.