Chapter 9

Testing with JUnit

Testing is the cornerstone of any professional software product. Testing itself comes in many forms, at many different levels. Software is tested as a whole, complete artifact, but when broken down, each of the individual components should be tested, and therefore be testable, too.

By thinking of testing at the start of the development process, the software as a whole becomes easier to test, both for a full testing team, if you are lucky enough to have one, and also for any automated tests. Plenty of rich, full-featured, complete testing solutions are available for testing code written for Java. One of the most ubiquitous and well-understood libraries for this is JUnit. JUnit provides a lightweight and simple interface for creating tests at any level, including unit tests, integration tests, system tests, and even something more exotic like user interface tests. JUnit itself is well integrated into many build tools such as Ant and Maven, and the more up-to-date ones such as Gradle and SBT. These tools automatically stop builds and flag errors should any test fail.

Writing tests first and making sure the tests are fully integrated into a build means that the tests will run any time a build is created, such as before code is checked in, or when a release artifact is built. This, in turn, builds confidence that the system is working correctly. Any time a test fails for a seemingly unknown reason, this can often be due to new code introducing bugs elsewhere in the system. This is known as a regression.

Development using JUnit is helped by the fact that the library is well integrated in most IDEs, allowing single tests or even multiple suites to be run within a couple of clicks. This gives extremely quick feedback from small changes as to whether the test passes. The smaller the change made between test runs, the more confidence you can have that your change made a test pass, rather than some environmental change or even a change in a seemingly unrelated piece of code.

This chapter covers how to use JUnit for several areas of the development life cycle, from unit testing small, isolated pieces, to combining these pieces with integration tests, to testing full, running systems with system tests.

Unit tests are a great way to show intent in any kind of assessment. As you will see, the JUnit Assert class provides a great way to open a dialogue with anyone else reading the code, showing exactly the intent for writing a test and any expectation or assumption made.


What value do JUnit tests give?


JUnit tests are often used with the development approach known as Test-Driven Development (TDD). The process of TDD is to perform short, iterative loops: You write a test based on expectations and assertions about what your code should do. Because you have not yet written the code under test, this test should not pass. You then write the code to make the test pass. Once the test passes, you repeat the process for a new piece of functionality. If your tests completely cover the specification of what you are trying to achieve, then once all the tests pass, you have confidence that your code works and is correct.
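
As a minimal sketch of one such loop, imagine a hypothetical Calculator class that does not exist yet. The test is written first and fails (it does not even compile until a skeleton Calculator exists); you then write just enough code to make it pass:

@Test
public void addsTwoNumbers() {
    final Calculator calculator = new Calculator();
    assertEquals(42, calculator.add(40, 2));
}

// The simplest implementation that makes the test pass
public class Calculator {
    public int add(final int a, final int b) {
        return a + b;
    }
}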

The tests do not need to be confined to making sure functionality is correct: You can specify non-functional requirements, and only pass tests once those requirements are met. Some examples could be making sure a server under load responds to requests within a certain time, or that certain security parameters are met.

For any reasonable-sized project, you will probably be using a build system, such as Maven or Ant. The tests you have written can, and should, be integrated into your build, so that any change to the underlying code that breaks the tests halts your build, and you must fix the code before the build works again. This stops you from introducing bugs into released code. For Maven, the tests are automatically run as part of the build. For Ant, tests are integrated with a single command.

Confidence in your code being correct means you can put increased reliance in automated tools. For developing server-side architectures, you can move toward a continuous delivery model, where fully tested code is automatically released to a production server with no human intervention after check-in.


How are JUnit tests run?


As previously mentioned, JUnit is well integrated into many build tools, but it is still possible to invoke it manually on the command line.

In the JUnit library, the class JUnitCore contains the main method used for starting the tests from the command line. The arguments are a list of classes to test:

$ /path/to/java -cp /path/to/junit.jar:. org.junit.runner.JUnitCore [classes to test]

For Maven, which works with project layouts following a specific convention, simply running mvn test in the root directory of the project will find and run all tests in the project. Maven itself is highly configurable, but simply declaring a dependency on the JUnit library is enough for Maven to use JUnit for running the tests. Chapter 19 describes how to set up Maven and declare project dependencies.

You can run Maven with a declaration of which tests to run. If you set the system property test, only the tests set in that property will be run:

mvn test -Dtest=SystemTest

The test parameter can take wildcards, so setting the system property to a value such as -Dtest=*IntegrationTest would run any test suffixed with IntegrationTest.

The JUnit Test Life Cycle

When you run a test suite, each test follows a prescribed set of steps. These steps can help you modularize your tests and reuse as much code as possible.


What happens when running a JUnit test?


A test suite is usually confined to a class. Before annotations arrived in JUnit 4, you would need your class to extend TestCase.

You can define a method, or set of methods, to run once, before the whole suite is run. These may do some long-running computations, such as preparing the filesystem for the upcoming tests, or some kind of pretesting notification to a build server or similar.

To run some code once as the test suite is started, you specify a public static method that returns void, and annotate it with @BeforeClass. The method is static, so you do not have access to a fully constructed instance of the test suite class, such as the instance variables or instance methods.

Mirroring this annotation is @AfterClass. Methods with this annotation are run after all tests have completed.

As soon as the @BeforeClass annotated methods have completed successfully, the test runner performs the following steps for each test in the suite:

1. A new instance of the suite is constructed. As with all Java classes, any code in the constructor is run. Test suite classes may only declare a single constructor with no arguments.
2. Immediately following the object construction, any public methods with a @Before annotation and with a void return type are run. These usually set up anything common across all tests, such as mock objects or objects with state. Because this is run before each test, you can use this to return stateful objects to their correct state, or perhaps set the filesystem to a state expected for your test. Because both the constructor and @Before annotated methods are run before each test, you can do any test setup in either of these positions. The convention is to perform the setup in the @Before methods, to keep symmetry with the equivalent @After method.
3. The test is then run. Tests are methods defined with the @Test annotation; they are public and, again, have a void return type.
4. Following a successful, or unsuccessful, run of the test, the @After annotated (again, public void) methods are called. This will tidy up anything the test may have dirtied, such as a database or filesystem, or perhaps perform some post-test logging.

The order in which the @Test methods run is not guaranteed, and neither is the order among multiple @Before or @After methods, so you cannot do some partial setup in one @Before method and expect another @Before method written later in the source file to finish that setup. This is the heart of JUnit: Your tests should be independent and atomic.

Listing 9-1 shows all of the steps for a suite with two tests, using a counter to verify the order in which all the components are run.

Listing 9-1: The life cycle of a JUnit test

public class JUnitLifecycle {

    private static int counter = 0;

    @BeforeClass
    public static void suiteSetup() {
        assertEquals(0, counter);
        counter++;
    }

    public JUnitLifecycle() {
        assertTrue(Arrays.asList(1, 5).contains(counter));
        counter++;
    }

    @Before
    public void prepareTest() {
        assertTrue(Arrays.asList(2, 6).contains(counter));
        counter++;
    }

    @Test
    public void performFirstTest() {
        assertTrue(Arrays.asList(3, 7).contains(counter));
        counter++;
    }

    @Test
    public void performSecondTest() {
        assertTrue(Arrays.asList(3, 7).contains(counter));
        counter++;
    }

    @After
    public void cleanupTest() {
        assertTrue(Arrays.asList(4, 8).contains(counter));
        counter++;
    }

    @AfterClass
    public static void suiteFinished() {
        assertEquals(9, counter);
    }
}

The counter used in the code is a static variable because the test runner instantiates JUnitLifecycle twice, once for each @Test, and the count must be preserved across those instances.

Any test methods annotated with @Ignore are ignored. The @Ignore annotation is often used for tests that are known to fail and would break a continuous integration build. This is often a sign of code smell: The code backed by the tests has changed, but the tests have not. Code smell is a symptom that usually points to a deeper problem. This can lead to code not covered by tests, and reduces the confidence of a correct codebase. Any @Ignored tests in code submitted for an interview would be treated with concern.

Should you ever find yourself with a justifiable reason for annotating some tests with @Ignore, be sure to include a comment with a date as to why the test is ignored, describing how and when this will be rectified.

You can also use the @Ignore annotation at the class level, instructing the JUnit runner to skip a whole suite of tests.
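
Whether at method or class level, if you do have to ignore a test, the date and reason can go directly in the annotation's value or in an adjacent comment. The following fragment is purely illustrative, including its ticket reference:

@Ignore("2014-06-01: depends on the legacy billing server, which is being "
        + "replaced under ticket PROJ-123; re-enable once the stub is ready")
@Test
public void billingServerRoundTrip() {
    // intentionally skipped until the stub service is available
}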

Best Practices for Using JUnit

The near-ubiquity of JUnit nowadays and the expressiveness of the library give you great power to show your potential in an interview.


How do you verify that your tests are successful?


One of the core classes in the JUnit library is the Assert class. It contains many static methods for expressing an assumption, which then verifies that assumption is true. Some of the key methods and their function are:

  • assertEquals—Two objects are equal according to their equals method.
  • assertTrue and assertFalse—The given statement matches the Boolean expectation.
  • assertNotNull—An object is not null.
  • assertArrayEquals—The two arrays contain the same values, checking equality by equals if comparing Object arrays.

If the assertion does not hold, an exception is thrown. Unless that exception is expected, or caught, that exception will fail the JUnit test.

There is also the fail method, which you can use if your test has reached a failing state. Most of the assertXXX methods call fail when necessary.

Take a look at the Assert class, and see what other assertion methods exist and how you can make use of them.
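
As a small, purely illustrative fragment (assuming the usual static imports from org.junit.Assert), several of these assertions, along with fail, look like this:

@Test
public void assertionExamples() {
    final List<String> names = Arrays.asList("Alice", "Bob");

    assertEquals(2, names.size());
    assertTrue(names.contains("Alice"));
    assertFalse(names.isEmpty());
    assertNotNull(names);
    assertArrayEquals(new String[] {"Alice", "Bob"}, names.toArray());

    if (names.contains("Zach")) {
        fail("Zach should not have been added to the list");
    }
}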

Each of the assertXXX methods is overloaded in pairs, with an extra String parameter available:

public static void assertTrue(String message, boolean condition)
public static void assertTrue(boolean condition)

This String parameter is a message that is displayed when the assertion fails. Listing 9-2 shows a simple example of its usage.

Listing 9-2: Assertion with a failure message

@Test
public void assertionWithMessage() {
    final List<Integer> numbers = new ArrayList<>();
    numbers.add(1);

    assertTrue("The list is not empty", numbers.isEmpty());
}

junit.framework.AssertionFailedError: The list is not empty

Listing 9-3 shows what happens if you use the assertTrue method without a message string.

Listing 9-3: Assertion without a failure message

@Test
public void assertionWithoutMessage() {
    final List<Integer> numbers = new ArrayList<>();
    numbers.add(1);

    assertTrue(numbers.isEmpty());
}

junit.framework.AssertionFailedError: null

The message is merely the word null. In a larger test with several assertTrue or assertFalse assertions, this can often lead to confusion as to why exactly a certain assertion is failing. If you use assertEquals without a message, you are at least notified of the difference between the two unequal values, but the error message carries no explanation of what that difference means.
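
For example, a message-less assertEquals on the list from Listing 9-3 would fail with something like the following; the differing values are shown, but there is no explanation of why they should have been equal:

assertEquals(5, numbers.size());

junit.framework.AssertionFailedError: expected:<5> but was:<1>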

There are very few situations in which you would not provide this message parameter, especially when your code is being assessed for an interview.

However, it is possible to go one better. Although these messages are only ever printed for failing assertions, a casual reader of the code can mistake them for expectations rather than explanations of failure.

You can avoid this confusion through the language you use when writing the test. If the message describes what should happen, the code reads well, and when the message is displayed on a failure, it still makes sense. Take the assertion from Listing 9-2:

assertTrue("The list is not empty", numbers.isEmpty());

This can be casually seen as a contradiction. The first parameter says "The list is not empty", but the second parameter says numbers.isEmpty—these are opposites. Had this been written instead as

assertTrue("The numbers list should not be empty", numbers.isEmpty());

the message on the top of a failed assertion stack trace would still make sense, and the code would be clearer for anyone, whether it’s a peer reviewing your code or an interviewer assessing a coding test.

For brevity, the convention of this book is to not include the failure message string parameter in any JUnit tests, because the intention should be clear from the discussion.


How can you expect certain exceptions?


If you are testing a failing situation in your code, and you expect an exception to occur, you can notify the test of your expected exception type. If that exception is thrown in the test, it passes. Completion of the test without that exception being thrown is a failure. Listing 9-4 shows this in action.

Listing 9-4: Expecting exceptions

@Test(expected = NoSuchFileException.class)
public void expectException() throws IOException {
    Files.size(Paths.get("/tmp/non-existent-file.txt"));
}

The parameter to the @Test annotation provides an indication to the test runner that this test should throw an exception. The method still needs to declare throws IOException here because IOException is a checked exception.

Had the test in Listing 9-4 been longer than one line, and the expected exception had been something more general, such as RuntimeException or even Throwable, it would be very hard to differentiate between the expected exception and a problem elsewhere, such as the test environment, as Listing 9-5 shows.

Listing 9-5: A poorly defined test for expecting exceptions

@Test(expected = Throwable.class)
public void runTest() throws IOException {
    final Path fileSystemFile = Paths.get("/tmp/existent-file.txt");

    // incorrect usage of Paths.get
    final Path wrongFile = Paths.get("http://example.com/wrong-file");

    final long fileSize = Files.size(fileSystemFile);
    final long networkFileSize = Files.size(wrongFile);

    assertEquals(fileSize, networkFileSize);
}

Every single line in this test has the possibility of throwing an exception, including the assertion itself. In fact, if the assertion fails, the AssertionError it throws matches the expected Throwable, and therefore the test incorrectly passes!

It is advisable to use the expected parameter on the @Test annotation sparingly. The most reliable tests using this parameter have only one line in the method body: the line that should throw the exception. Therefore, it is crystal clear to see how and why a test could start failing, such as shown in Listing 9-4 earlier.

Of course, methods can throw exceptions, and you should still be able to test for those exceptions. Listing 9-6 details a clearer way to handle this. Imagine a utility method, checkString, which throws an exception if the String passed to the method is null.

Listing 9-6: Explicitly expecting exceptions

@Test
public void testExceptionThrowingMethod() {
    final String validString = "ValidString";
    final String emptyValidString = "";
    final String invalidString = null;

    ParameterVerifications.checkString(validString);
    ParameterVerifications.checkString(emptyValidString);
    try {
        ParameterVerifications.checkString(invalidString);
        fail("Validation should throw exception for null String");
    } catch (ParameterVerificationException e) {
        // test passed
    }
}

This is explicitly testing which line should throw an exception. If the exception is not thrown, the next line is run, which is the fail method. If a different type of exception is thrown than the one expected by the catch block, it is propagated to the calling method; because there is no expectation of an exception leaving the method, the test fails. The comment in the catch block is necessary because it informs anyone reading the code that you intended to leave it blank. This is advisable any time you leave a code block empty.
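
For reference, a minimal sketch of the hypothetical ParameterVerifications utility assumed by Listing 9-6 could look like this; the exception type is unchecked so that the calling test needs no throws clause:

public class ParameterVerifications {

    public static void checkString(final String value) {
        if (value == null) {
            throw new ParameterVerificationException(
                    "String parameter must not be null");
        }
    }
}

public class ParameterVerificationException extends RuntimeException {
    public ParameterVerificationException(final String message) {
        super(message);
    }
}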


How can a test fail if it does not complete quickly enough?


The @Test annotation can take two parameters. One is expected (which you have already seen), which allows tests to pass when a certain type of exception is thrown. The other parameter is timeout, which takes a value of type long. The value represents a number of milliseconds; if the test runs for longer than that time, the test fails.

This parameter can help verify certain non-functional requirements. For example, if you were writing a service that had a requirement to respond within a second, you would write an integration test to make sure this requirement was met. Listing 9-7 demonstrates this.

Listing 9-7: Failing long-running tests

@Test(timeout = 1000L)
public void serviceResponseTime() {
    // constructed against a real service
    final HighScoreService realHighScoreService = ...
    final Game gameUnderTest = new Game(realHighScoreService);
    final String highScoreDisplay = gameUnderTest.displayHighScores();
    assertNotNull(highScoreDisplay);
}

This integration test calls out to a real-world high score service, and if the test does not complete within a second, an exception is thrown with the notification that the test timed out.

Of course, this timeout is for the whole test to complete, not the specific long-running method call. Similar to the exception expectation, if you want to explicitly check that a method call took less than a certain amount of time, you can either have that single method call in the test, or run the method call in a separate thread, as shown in Listing 9-8.

Listing 9-8: Explicitly timing tests

@Test
public void manualResponseTimeCheck() throws InterruptedException {
    final HighScoreService realHighScoreService = 
            new StubHighScoreService();

    final Game gameUnderTest = new Game(realHighScoreService);

    final CountDownLatch latch = new CountDownLatch(1);
    final List<Throwable> exceptions = new ArrayList<>();

    final Runnable highScoreRunnable = new Runnable() {
        @Override
        public void run() {
            final String highScoreDisplay = 
                    gameUnderTest.displayHighScores();
            try {
                assertNotNull(highScoreDisplay);
            } catch (Throwable e) {
                exceptions.add(e);
            }
            latch.countDown();
        }
    };

    new Thread(highScoreRunnable).start();
    assertTrue(latch.await(1, TimeUnit.SECONDS));

    if(!exceptions.isEmpty()) {
        fail("Exceptions thrown in different thread: " + exceptions);
    }
}

Managing the timing has added quite a lot of code. If you are not familiar with threads or CountDownLatches, these are covered in Chapter 11. What happens here is that the main executing thread waits for the thread running the test to complete, and if that takes longer than a second, latch.await returns false, the surrounding assertion fails, and so does the test.

Also, JUnit will only fail a test for any failed assertions on the thread running the test, so you need to capture and collate any exceptions from assertions (or otherwise) from the spawned thread, and fail the test if any exceptions were thrown.


How does the @RunWith annotation work?


The @RunWith annotation is a class-level annotation, and it provides a mechanism for changing the default behavior of the test runner. The parameter to the annotation is a subclass of Runner. JUnit itself comes with several runners, the default being JUnit4, and a common alternative is the Parameterized class.

When a JUnit test is annotated with @RunWith(Parameterized.class), several changes are made to the life cycle of the test and the way the test is run. A class-level method providing the test data is expected, and this returns an array of data to use for testing. This data could be hard-coded in the test, or for more sophisticated tests, this could be dynamically produced, or even pulled in from a filesystem, database, or another relevant storage mechanism.

However this data is generated, each element of the returned collection is passed to the constructor of the test suite, and all of the tests are run with that data.

Listing 9-9 shows a test suite run with the Parameterized runner. It provides a layer of abstraction over the tests; all of the tests are run against each of the data sets.

Listing 9-9: Using the Parameterized test runner

@RunWith(Parameterized.class)
public class TestWithParameters {

    private final int a;
    private final int b;
    private final int expectedAddition;
    private final int expectedSubtraction;
    private final int expectedMultiplication;
    private final int expectedDivision;

    public TestWithParameters(final int a,
                              final int b,
                              final int expectedAddition,
                              final int expectedSubtraction,
                              final int expectedMultiplication,
                              final int expectedDivision) {
        this.a = a;
        this.b = b;
        this.expectedAddition = expectedAddition;
        this.expectedSubtraction = expectedSubtraction;
        this.expectedMultiplication = expectedMultiplication;
        this.expectedDivision = expectedDivision;
    }

    @Parameterized.Parameters
    public static List<Integer[]> parameters() {
        return new ArrayList<Integer[]>(3) {{
            add(new Integer[] {1, 2, 3, -1, 2, 0});
            add(new Integer[] {0, 1, 1, -1, 0, 0});
            add(new Integer[] {-11, 2, -9, -13, -22, -5});
        }};
    }

    @Test
    public void addNumbers() {
        assertEquals(expectedAddition, a + b);
    }

    @Test
    public void subtractNumbers() {
        assertEquals(expectedSubtraction, a - b);
    }

    @Test
    public void multiplyNumbers() {
        assertEquals(expectedMultiplication, a * b);
    }

    @Test
    public void divideNumbers() {
        assertEquals(expectedDivision, a / b);
    }
}

This listing introduces some new concepts. First, a new annotation, @Parameterized.Parameters, needs to be placed on a public class method, and it returns a list of arrays. The objects in each array are passed to the constructor of the test, with the same ordering in the array as the ordering to the constructor.

One thing to bear in mind is that for test suites that require many parameters, it can be unwieldy or unclear as to which position in the provided array matches which constructor argument.
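
One way to mitigate this, available from JUnit 4.11 onward, is to drop the constructor and instead annotate public fields with @Parameterized.Parameter, giving the index of the value each field receives. A cut-down sketch, covering just the addition case from Listing 9-9, could look like this:

@RunWith(Parameterized.class)
public class TestWithParameterFields {

    @Parameterized.Parameter(0)
    public int a;

    @Parameterized.Parameter(1)
    public int b;

    @Parameterized.Parameter(2)
    public int expectedAddition;

    @Parameterized.Parameters
    public static List<Integer[]> parameters() {
        return Arrays.asList(
                new Integer[] {1, 2, 3},
                new Integer[] {0, 1, 1});
    }

    @Test
    public void addNumbers() {
        assertEquals(expectedAddition, a + b);
    }
}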

For this listing, the parameters() class method returns a readily constructed instance of an ArrayList.


What can you do if you want to customize the running of your tests?


At times it may be appropriate to create your own test runner.

One property of a good suite of unit tests is that they should be atomic. This means that they should be able to be run in any sequence, with no dependency on any other test in the suite.


Using Double Braces
The parameters for the test in Listing 9-9 use syntactic sugar, with double braces following the constructor. Inside the double braces, instance methods are called. What is happening here is that an anonymous subclass of ArrayList has been created, and inside that subclass an instance initializer block is defined, which runs when each object is constructed. Because the block is not defined as static, it is at the instance level, and so instance methods and variables can be called. The following code shows the logical, but more verbose, equivalent of that code, which should highlight exactly what is happening if it is not yet clear:
public class ListOfIntegerArraysForTest 
                           extends ArrayList<Integer[]> {
    {
        this.add(new Integer[]{1, 2, 3, -1, 2, 0});
        this.add(new Integer[]{0, 1, 1, -1, 0, 0});
        this.add(new Integer[]{-11, 2, -9, -13, -22, -5});
    }

    public ListOfIntegerArraysForTest() {
        super(3);
    }
}

@Parameterized.Parameters
public static List<Integer[]> parameters() {
    return new ListOfIntegerArraysForTest();
}
If you are still unsure of how this works, try running the code with some logging statements, or step through it in a debugger in your favorite IDE.
It is not advisable to use this double-brace technique for production code, because it creates a new anonymous class for each declaration, which will take up space in the PermGen area of the JVM’s memory for each new anonymous class. There is also the overhead of actually loading each class, such as verification and initialization. But for testing, this provides a neat and concise way of creating immutable objects, with the object declaration and construction all held together. Be wary that this can be confusing to people who are not familiar with the construct, so perhaps include a comment, or talk directly with your team to make them aware of this.

Though this is generally an implied rule, there is nothing within the JUnit library to enforce this. However, it should be possible to write your own test runner that runs the test methods in a random order. Should any test be dependent on any other, running the tests in a different order could expose this.

The JUnit runners are concrete implementations of the Runner abstract class. Listing 9-10 shows the class structure, along with the methods, that must be implemented for a custom runner.

Listing 9-10: The Runner class

public abstract class Runner implements Describable {
    public abstract Description getDescription();
    public abstract void run(RunNotifier notifier);

    public int testCount() {
        return getDescription().testCount();
    }
}

In your implementation of the Runner, there is just one convention to follow: You must provide a single-argument constructor that takes a Class object. JUnit will call this constructor once, passing in the class of the test suite. You are then free to do whatever you require with that class.

The Runner abstract class shows two methods that must be implemented: getDescription, which informs which methods to run and in which order, and run, which actually runs the test.

Given a class of the test suite and these two methods, it should be clear that this gives total flexibility as to what is run, how it is run, and how to report the outcome.

The Description class is a relatively straightforward class that defines what exactly is to be run, and the order of the run. To create a Description instance for running a set of JUnit tests, you call the static method createSuiteDescription. To add actual tests, you add child Description objects with the addChild method. You create the actual test description instances with another static method, createTestDescription. Considering that Description objects themselves contain Descriptions, you can create a tree of Descriptions several instances deep. In fact, this is what the Parameterized runner does: For a test suite, it creates a child for each of the parameters, and for each of those children, it creates a child for each test.

Given that the getDescription method on the Runner class has no parameters, you need to create the instance to return before the method is called: on construction of the runner. Listing 9-11 shows the construction and the getDescription method of the RandomizedRunner.

Listing 9-11: The RandomizedRunner description generation

public class RandomizedRunner extends Runner {

    private final Class<?> testClass;
    private final Description description;

    private final Map<String, Method> methodMap;

    public RandomizedRunner(Class<?> testClass)
                            throws InitializationError {

        this.testClass = testClass;
        this.description =
                Description.createSuiteDescription(testClass);

        final List<Method> methodsUnderTest = new ArrayList<>();

        for (Method method : testClass.getMethods()) {
            if (method.isAnnotationPresent(Test.class)) {
                methodsUnderTest.add(method);
            }
        }

        Collections.shuffle(methodsUnderTest);

        methodMap = new HashMap<>();

        for (Method method : methodsUnderTest) {
            description.addChild(Description.createTestDescription(
                    testClass,
                    method.getName())
            );

            System.out.println(method.getName());
            methodMap.put(method.getName(), method);
        }
    }

    @Override
    public Description getDescription() {
        return description;
    }

    @Override
    public void run(RunNotifier runNotifier) {
        // not yet implemented
    }

    public static void main(String[] args) {
        JUnitCore.main(RandomizedRunner.class.getName());
    }
}

Reflection is used to find all of the methods from the given testClass suite; each is tested for the presence of the @Test annotation, and if so, is captured for running as a test later.

Notice that there is no constraint on using the @Test annotation for marking the methods to be run as tests. You could have used any annotation, or indeed any kind of representation, such as a prefix of test on any relevant method. (This was the standard before JUnit 4, before annotations were part of the Java language.) In fact, as you will see in the section “Creating System Tests with Behavior-Driven Development” later in this chapter, these tests are run without using the @Test annotation.

For brevity, this Runner implementation is more limited than the default implementation in JUnit. It assumes that all methods annotated with @Test will be runnable as a test. There is no understanding of any other annotation such as @Before, @After, @BeforeClass, and @AfterClass. Also note that @Ignore is not checked either: Merely the presence of @Test will mark that method for testing.

As each method on the class is checked for the @Test annotation, it is placed into a holding collection, and is then randomly sorted using the utility method Collections.shuffle.

The methods are added to the Description object in that random order. The description uses the method name to keep track of the test to run. So, for convenience, the method object is added to a mapping against its name for later lookup by the description’s method name.

The Description object itself informs the JUnit framework of what is going to be run; it’s still up to the runner to actually run the tests and decide if the test suite has succeeded, or if not, what has failed. When the framework is ready to run the tests, it calls the Runner’s run method. Listing 9-12 is the RandomizedRunner class’s implementation.

Listing 9-12: The RandomizedRunner’s run method

@Override
public void run(RunNotifier runNotifier) {
    runNotifier.fireTestStarted(description);
    for (Description descUnderTest : description.getChildren()) {
        runNotifier.fireTestStarted(descUnderTest);
        try {
            methodMap.get(descUnderTest.getMethodName())
                    .invoke(testClass.newInstance());

            runNotifier.fireTestFinished(descUnderTest);
        } catch (Throwable e) {
            runNotifier.fireTestFailure(
                    new Failure(descUnderTest, e.getCause()));
        }
    }
    runNotifier.fireTestFinished(description);
}

The run method has one parameter: a RunNotifier object. This is the hook back to the JUnit framework to say whether each test has succeeded or failed, and if it has failed, what exactly happened.

Remember that for the RandomizedRunner, the Description object had two levels: a top level for the suite, and then a child for each test. The run method needs to notify the framework that the suite has started running. It then calls each of the children, in turn. These children were added to the description in the random order.

For each of these children, a new instance of the test suite class is created, and the test method is invoked using reflection. Before the invocation, the framework is notified that the test is about to be run.


JUnit and IDEs
You may recognize that IDEs such as IntelliJ and Eclipse have very tight integration with JUnit: during a test run, their interfaces show exactly what test is running, what has already run, what has passed, and what has failed. The IDEs are using the calls to the RunNotifier to update their own test runner display.

If and when the test passes, the RunNotifier is told that the test has completed. But if the invocation throws any kind of exception, the catch block updates the RunNotifier of the failure. One point to note here is that the Failure object takes two parameters: the description object that has failed and the exception instance causing the failure. Considering this method was invoked using reflection, any exception caught will be a java.lang.reflect.InvocationTargetException, with the causing exception chained to that exception. It is therefore much clearer to any user of the RandomizedRunner to display the cause, ignoring the exception from reflection.

Now, any test suites marked with @RunWith(RandomizedRunner.class) can take advantage of having the test methods run in a non-specific order. Should multiple runs of the suite result in flickering tests (tests that sometimes pass and sometimes fail), this will show an issue with one or more of the tests: They may depend on being run in a certain order, or the tests may not be cleaning their environment before or after their run.

A couple of possible extensions to this runner could help figure out which test is causing a problem. When creating the Description object, you could add each method under test several times before the list of all methods is shuffled. Another approach could be to perform a binary search on the tests: Should any test fail, you could split the test list in two and rerun each half. If all the tests in one sublist pass, the problematic test is most likely in the other half. You could keep splitting the list in a similar manner until the test causing problems has been isolated.

Eliminating Dependencies with Mocks


What is the difference between a unit test and an integration test?


JUnit has something of an unfortunate name, in that it can be used for both unit testing and integration testing, among other things. The usual definition for a unit test is a test that purely tests a single unit of functionality. It will have no side effects. A well-defined unit test, run several times with no other changes, will produce exactly the same result each time.

Integration tests are a more complex beast. As their name suggests, integration tests examine several parts of a system to make sure that when integrated, these parts behave as expected. These tests often cover things like calling and reading from a database, or even involving calls to another server. There can often be dummy data inserted into a database, which will be expected for the test. This can lead to brittle tests, should database schemas or URL endpoints change. It is possible to use in-memory databases to test database interaction within integration tests. This is covered in Chapter 12.

For anything but the simplest system, testing single classes in isolation can be difficult, because they will have dependencies on external factors, or on other classes, which in turn can have dependencies on external factors, and so on. After all, any running application that has no external dependencies cannot affect the outside world—it is essentially useless!

To break the dependencies between classes and the outside world, you have two main techniques at your disposal: dependency injection and mocking. The two go hand in hand. Dependency injection, and specifically how to use it with integration testing, is covered in more detail in Chapter 16, but Listing 9-13 highlights a small example.

Listing 9-13: A game with a dependency on a high score service

public interface HighScoreService {
    List<String> getTopFivePlayers();
    boolean saveHighScore(int score, String playerName);
}

public class Game {

    private final HighScoreService highScoreService;

    public Game(HighScoreService highScoreService) {
        this.highScoreService = highScoreService;
    }

    public String displayHighScores() {
        final List<String> topFivePlayers =
                highScoreService.getTopFivePlayers();
        final StringBuilder sb = new StringBuilder();

        for (int i = 0; i < topFivePlayers.size(); i++) {
            String player = topFivePlayers.get(i);
            sb.append(String.format("%d. %s%n", i+1, player));
        }

        return sb.toString();
    }
}

You could imagine an implementation of the HighScoreService interface calling a database, which would sort results before returning the list of top players, or perhaps an implementation calling a web service to get the data.

The Game class depends on the HighScoreService—it should not care how and where the high scores come from.

To test the algorithm in the displayHighScores() method, an instance of the HighScoreService is needed. If the implementation used in testing is configured to call an external web service, and that service is down for one of any number of reasons, then the test will fail, and you can do little to rectify it.

However, if a special instance is created specifically for testing, one that reacts exactly as expected, it can be passed to the Game when it is constructed, allowing the displayHighScores() method to be tested in isolation. Listing 9-14 is one such implementation.

Listing 9-14: Testing the Game class with a stub

public class GameTestWithStub {

    private static class StubHighScoreService
            implements HighScoreService {
        @Override
        public List<String> getTopFivePlayers() {
            return Arrays.asList(
                    "Alice",
                    "Bob",
                    "Charlie",
                    "Dave",
                    "Elizabeth");
        }

        @Override
        public boolean saveHighScore(int score, String playerName) {
            throw new UnsupportedOperationException(
                    "saveHighScore not implemented for this test");
        }
    }

    @Test
    public void highScoreDisplay() {
        final String expectedPlayerList =
                "1. Alice\n" +
                        "2. Bob\n" +
                        "3. Charlie\n" +
                        "4. Dave\n" +
                        "5. Elizabeth\n";

        final HighScoreService stubbedHighScoreService =
                new StubHighScoreService();
        final Game gameUnderTest = new Game(stubbedHighScoreService);

        assertEquals(
                expectedPlayerList,
                gameUnderTest.displayHighScores());
    }
}

Using a stub implementation of the HighScoreService means that the Game class will return the same list every time, with no latency from a network or database processing, and no dependency on whether a given high score service happens to be running at the time of the test. This is not a silver bullet, though. You would also want to verify other properties of the call to the high score service: for instance, that the service is called once and only once to generate the list, that the Game instance is not doing something inefficient such as calling the service once for each of the five players, and that the service is not bypassed entirely with the Game somehow using a stale cached value.

Introducing a mock object can take care of these criteria. You can think of a mock as a smarter stub. It can be instructed to respond differently to successive calls, or to calls with particular arguments, and it records every call made to it, so you can assert that the calls were actually made as expected.

One such Java library is called Mockito. Listing 9-15 expands on Listing 9-14, testing the Game class more thoroughly.

Listing 9-15: Testing the Game class with a mock

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

public class GameTestWithMock {

    private final Game gameUnderTest;
    private final HighScoreService mockHighScoreService;

    public GameTestWithMock() {
        final List<String> firstHighScoreList = Arrays.asList(
                "Alice",
                "Bob",
                "Charlie",
                "Dave",
                "Elizabeth");
        final List<String> secondHighScoreList = Arrays.asList(
                "Fred",
                "Georgia",
                "Helen",
                "Ian",
                "Jane");

        this.mockHighScoreService = mock(HighScoreService.class);

        Mockito.when(mockHighScoreService.getTopFivePlayers())
                .thenReturn(firstHighScoreList)
                .thenReturn(secondHighScoreList);

        this.gameUnderTest = new Game(mockHighScoreService);
    }

    @Test
    public void highScoreDisplay() {
        final String firstExpectedPlayerList =
                        "1. Alice\n" +
                        "2. Bob\n" +
                        "3. Charlie\n" +
                        "4. Dave\n" +
                        "5. Elizabeth\n";

        final String secondExpectedPlayerList =
                        "1. Fred\n" +
                        "2. Georgia\n" +
                        "3. Helen\n" +
                        "4. Ian\n" +
                        "5. Jane\n";

        final String firstCall = gameUnderTest.displayHighScores();
        final String secondCall = gameUnderTest.displayHighScores();

        assertEquals(firstExpectedPlayerList, firstCall);
        assertEquals(secondExpectedPlayerList, secondCall);

        verify(mockHighScoreService, times(2)).getTopFivePlayers();
    }
}

This test can be broadly split into three parts: The test and mock are set up (which is done in the test suite constructor), the test itself is run, and then the assertions are checked.

In the constructor, a mock HighScoreService class is created, using Mockito’s mocking framework. This creates the special instance, which is an implementation of the HighScoreService interface. Following this mock generation, the mock is instructed how to react to the getTopFivePlayers() call. It is told to give different responses to its first and second call, simulating the data from the service changing between calls. It is not relevant for this test, but for any subsequent calls to getTopFivePlayers(), the mock will return the second list repeatedly. Once the mock has been configured, this instance is passed to the constructor of the Game class.

The test is then executed. The displayHighScores() method is called twice; the expectation is that each call to displayHighScores() results in exactly one call to getTopFivePlayers() on the HighScoreService instance. Every call made on the mock is recorded by the mock itself.

The responses are then asserted against the expectation of how the Game will format the top five players. Finally, the test checks with Mockito that the mock was used in the correct way. This test expected the getTopFivePlayers() method on the mock to be called twice. If this verification is not correct, an appropriate Mockito exception is thrown, failing the test.

Take a moment to make sure you fully understand how the mock interacts with the Game class. Note that, for the high scores, the Game class has been verified to work properly, at least for the test cases presented, without any thought given to the implementation of the HighScoreService. Any future change to a HighScoreService implementation that does not change the interface definition will have no effect on this test. A clear separation has been made between the Game and the HighScoreService.

The mocks that Mockito provides are extremely flexible. Some mocking libraries can only mock interfaces, but Mockito can mock concrete classes too. This is extremely useful for working with legacy code, where perhaps dependency injection wasn’t used properly, or interfaces were not used at all.

Mockito cannot mock final classes, due to the way they are stored and constructed in the JVM. The JVM takes precautionary measures for security to make sure a final class is not replaced.
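
The syntax for mocking a concrete class is exactly the same as for an interface. As a small sketch, assuming a hypothetical non-final LegacyScoreDao class with a loadScores() method returning a List<Integer>, a test can stub and verify it without ever touching the real implementation:

// LegacyScoreDao is an assumed legacy class, not an interface
final LegacyScoreDao scoreDao = mock(LegacyScoreDao.class);
Mockito.when(scoreDao.loadScores())
        .thenReturn(Arrays.asList(100, 250, 999));

assertEquals(3, scoreDao.loadScores().size());
verify(scoreDao, times(1)).loadScores();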


How expressive can you make your tests?


You can use the Hamcrest matcher library in conjunction with JUnit to help articulate assertions and improve readability of tests. The construction of an assertion using Hamcrest matchers is a form of Domain-Specific Language (DSL). Listing 9-16 shows a simple example.

Listing 9-16: Using the Hamcrest matchers

import static org.hamcrest.MatcherAssert.*;
import static org.hamcrest.Matchers.*;

public class HamcrestExample {

    @Test
    public void useHamcrest() {
        final Integer a = 400;
        final Integer b = 100;

        assertThat(a, is(notNullValue()));
        assertThat(a, is(equalTo(400)));
        assertThat(a - b, is(greaterThan(0)));
    }
}

These assertion lines attempt to read like English. The library has several methods that return an instance of Matcher. When the matcher is evaluated, if it is not true, the assertThat method throws an exception. Similar to the Assert class methods in JUnit, the assertThat method can take an extra String parameter, which is displayed when the assertion fails. Again, try to make that message read well in the code, and have it highlight an error by describing what should happen.

The is method is more syntactic sugar. Although it does not add any functional value, it provides help to the programmer, or anyone reading the code. Because the is method merely delegates to the matcher inside the method, it is optional, serving only to make the code more legible.

The use of Hamcrest matchers often divides opinion. If you like them, use them. If not, then unless you are specifically asked to use them, nobody will expect you to.

Creating System Tests with Behavior-Driven Development


What is Behavior-Driven Development?


Compared to unit testing, Behavior-Driven Development (BDD) is a much newer concept. BDD tests are composed of two components: the test script, which is written in language as close to natural language as possible, and the code backing that script, which actually runs the test.

A general motivator behind these tests is to provide a very high-level test, independent of any implementing code, essentially treating a system as a black box. These tests are often called system tests. Listing 9-17 is a script, called a feature, for use in a BDD test.

Listing 9-17: A feature for a BDD test

Feature: A simple mathematical BDD test

  Scenario Outline: Two numbers can be added
    Given the numbers <a> and <b>
    When the numbers are added together
    Then the result is <result>

  Examples:
    |  a |  b | result |
    |  1 | 10 |     11 |
    |  0 |  0 |      0 |
    | 10 | -5 |      5 |
    | -1 |  1 |      0 |

This has been written to run with the Cucumber library. Cucumber originated as a BDD framework for the Ruby language, and was later rewritten to run natively on the JVM. Other BDD frameworks, such as JBehave, work in a similar way.

It is quite easy to imagine that a non-technical user of the system, perhaps the product owner or a major stakeholder, could write scripts like this, or perhaps even a customer, if that is relevant.

The script follows a prescribed set of instructions, called steps. The steps are split into three types: Given, When, and Then.

  • The Given steps prime the system into a known state, which is necessary for the following steps.
  • The When steps perform the action under test. This will, or at least should, have a measurable effect on the system under test.
  • The Then steps measure that the When steps were performed successfully.

The ordering of the steps is important: Given is followed by When, which is followed by Then. It is possible to have multiple steps for each of these keywords, but rather than repeating Given/When/Then, you use And instead:

Given a running web server
And a new user is created with a random username
... etc

Of course, this script needs to be backed up by actual running code in order to verify that the test passes. Listing 9-18 is a possible implementation for the feature in Listing 9-17.

Listing 9-18: Cucumber steps

public class CucumberSteps {

    private int a;
    private int b;
    private int calculation;

    @Given("^the numbers (.*) and (.*)$")
    public void captureNumbers(final int a, final int b) {
        this.a = a;
        this.b = b;
    }

    @When("^the numbers are added together$")
    public void addNumbers() {
        this.calculation = a + b;
    }

    @Then("^the result is (.*)$")
    public void assertResult(final int expectedResult) {
        assertEquals(expectedResult, calculation);
    }
}

@RunWith(Cucumber.class)
public class CucumberRunner { }

Each of the steps is defined in its own method, with an annotation for that method. The annotation has a string parameter, which itself is a regular expression. When the test is run, the runner searches the methods with the correct annotation to find a regular expression match for that step.

Listing 9-17 parameterized the scenario, with what is called a scenario outline. This means that each step can be called with a variable input. This input can be captured with the regular expression grouping. A parameter of the appropriate type is then required on the method, giving the code access to the different test inputs.

Note that the implementation of Cucumber scenarios requires a specific JUnit runner, provided by the Cucumber jar. The @RunWith annotation must reside on a separate class from the class defining the steps. Even though the Cucumber tests are run through the JUnit library, using this runner means that no @Test annotations are necessary.

If a method can be used by more than one step classification, merely adding the additional annotation to the method allows that code to be run for that step as well.

Once you have written a few tests, a small vocabulary will be available, and you can reuse these Given/When/Then lines to make new tests, requiring minimal new coding effort (or perhaps none at all). Designing tests this way gives plenty of scope for those writing the test scripts to think up their own scenarios, writing new tests from steps already implemented. Following on from the addition example, imagine a new requirement to be able to subtract numbers. You could write the new scripts, and you would need to add only one new method:

@When("^the first number is subtracted from the second$")
public void subtractAFromB() {
    this.calculation = b - a;
}

When extra steps are joined with And, the backing method still uses the annotation of the step type it continues: Given, When, or Then.

BDD tests are a useful vehicle for managing a test over several changes in state. This could work very well with database interaction. The Given step could prime the database ready for the test, checking for or inserting necessary data. The When step performs the test, and the Then step asserts that the test was successful. As with the regular JUnit runner, you can provide @Before and @After annotated methods that are run before and after each scenario; note that these hook annotations come from the Cucumber library itself rather than from JUnit.
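
A minimal sketch of such hooks, assuming a hypothetical TestDatabase helper for the database work, might look like the following; the @Before and @After annotations here are the Cucumber ones:

import cucumber.api.java.After;
import cucumber.api.java.Before;

public class DatabaseHooks {

    @Before
    public void primeDatabase() {
        // insert the rows the scenarios expect (TestDatabase is an assumed helper)
        TestDatabase.insertKnownData();
    }

    @After
    public void cleanDatabase() {
        // remove anything the scenario created, whether it passed or failed
        TestDatabase.reset();
    }
}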

Summary

If you are not writing tests when you write code, now is the time to start. You will find that the more experience you get writing tests, the easier it is to write your production code. You will break down problems into manageable, testable chunks. Any future changes that break existing code will be spotted sooner rather than later. Eventually, writing tests will become second nature.

Try to break up any tests that you do write into logical units. Write unit tests for single pieces of functionality, which have no side effects. Any dependencies will be mocked. Write integration tests for any test code that has a side effect, such as writing to a filesystem or database, and clean up any side effects when the test completes, whether it passes or fails. One convention is to suffix any integration test class with IntegrationTest, and unit tests simply with Test.

When faced with a technical test for an interview, writing tests is an ideal way to help express why you wrote code in a certain way. Even in a time-limited test, writing even basic tests first will help structure your code, and help avoid errors in any last-minute refactoring.

The error message parameter on the Assert class methods is a perfect way to open a dialogue with anyone reading your code; it is a space to write exactly what you are thinking, and exactly what your code should do. Use this space to communicate your thoughts.

Importing the JUnit jar is very simple in IntelliJ and Eclipse. Both these IDEs recognize when you are writing test code, and will give you the option to import the library directly: no need for editing Maven POMs and reimporting whole projects.

The technical test my current employer uses does not instruct candidates to write tests at all, but whenever a completed submission is received, the first thing we do is check what tests have been written. If there are no tests at all, it is highly unlikely that the candidate will be asked in for a face-to-face interview.

Chapter 10 looks at the Java Virtual Machine, how the Java language interacts with it, and what an interviewer may ask about how to optimize its performance and memory footprint.
