3

The TDD Process

The first two chapters have introduced you to the TDD process by showing you the steps involved. You have seen build failures when declaring multiple tests. You have seen what can happen when we get ahead of ourselves and write code that isn’t needed yet. That was a small example with a test result, but it still showed how easy it is to let code slip into a project before there are tests to support it. And you also saw how the code starts out with a simple or partial implementation, gets working first, and is then enhanced.

We will cover the following topics in this chapter:

  • How build failures come first and should be seen as part of the process
  • Why you should write only enough code to pass your tests
  • How to enhance a test and get another pass

This chapter will begin by introducing you to the TDD process. For a more detailed walkthrough with more code, refer to Chapter 10, The TDD Process in Depth.

Now, it’s time to begin learning about the TDD process in a more deliberate manner.

Technical requirements

All the code in this chapter uses standard C++ that builds with any modern compiler and standard library supporting C++17 or later. The code is based on and continues from the previous chapter.

You can find all the code for this chapter at the following GitHub repository:

https://github.com/PacktPublishing/Test-Driven-Development-with-CPP

Build failures come first

In the previous chapter, you saw how the first step to getting multiple tests to run was to write multiple tests. This caused a build failure. When you’re programming, it’s common to write code that doesn’t build at first. These are normally considered mistakes or errors that need to be fixed right away. And gradually, most developers learn to anticipate build errors and avoid them.

When following TDD, I want to encourage you to stop avoiding build errors, because avoiding them usually means working on a new feature or changing existing code before you try to use that new feature or updated code. This means that you’re making changes while focused on the details, and it’s easy to overlook bigger issues, such as how easy the new feature or updated code will be to use.

Instead, start out by writing code in the way that you think it should be used. That’s what was done with the tests. I showed you in the previous chapter that the end result of adding another test should look like this:

#include "../Test.h"
TEST
{
}
TEST
{
    throw 1;
}

The project was built, and the build failed with errors. This told us what needed to be fixed. But before making the changes, I showed what we really wanted the tests to look like:

#include "../Test.h"
TEST("Test can be created")
{
}
TEST("Test with throw can be created")
{
    throw 1;
}

Changes were made to enable multiple tests only once we had a clear idea of what the tests should look like. Had I not taken this approach, it’s possible that some other solution would have been found to name the tests. It might have even worked. But would it have been as easy to use? Would we be able to simply declare a second TEST like the code showed and give each a name right away? I don’t know.

But I do know that there have been many times when I did not follow this advice and ended up with a solution that I did not like. I had to go back and redo the work until I was happy with the result. Had I started with the result that I wanted in the first place, then I would have made sure to write code that directly led to that result.

All of this is really just a shift in focus. Instead of diving into the detailed design of what you are coding, take a step back and first write test code to use what you intend to make.

In other words, let the tests drive the design. This is the essence of TDD.

The code you write will not build yet because it relies on other code that doesn’t exist, but that’s OK because it gives you a direction that you’ll be happy about.

In a way, writing code from this user point of view gives you a goal and makes that goal real before you even start. Don’t settle for a vague idea of what you want. Take the time to write the code as you want it to be used first, build the project, and then work to fix the build errors.

Is it really necessary to try building your project when you know it will fail to build? This is a shortcut that I’ll take sometimes, especially when the build takes a long time or when the build failure is obvious. I mean, if I call a method that doesn’t exist yet, I’ll often write the method next without building. I know it will fail to build and what needs to be done to fix it.

But there are times when this shortcut can lead to problems, such as when working with overloaded methods or template methods. You might write code to use a new overload that doesn’t yet exist and think that the code will fail to build, when what actually happens is that the compiler will choose one of the existing overloaded versions to make the call. This is also the case with templates.
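
As a hypothetical illustration of how the compiler can quietly pick an existing overload, consider the following sketch. The confirm function here is made up for this example and is not part of the test library:

#include <iostream>

// Existing overload.
void confirm (int value)
{
    std::cout << "int overload called with " << value << "\n";
}

int main ()
{
    // We might write this call expecting a build failure until a
    // confirm(double) overload is added. Instead, 1.5 is implicitly
    // converted to int and the existing overload is called, so the
    // project builds without errors.
    confirm(1.5);
    return 0;
}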

You can find a good example of an expected build failure that actually built with no warnings or errors in Chapter 5, Adding More Confirm Types. The result was not what was wanted and building first allowed the problem to be caught right away.

The point is that building your project will let you know about these situations. If you expect the build to fail but it compiles anyway, then you know that the compiler was able to figure out a way to make the code work that maybe you weren’t expecting. This can lead to valuable insight. Because when you add the intended new overload, it’s possible that existing code will start calling your new method too. It’s always better to be aware of this situation, rather than being surprised by a hard-to-find bug.

When you’re still working on getting your tests to build, you don’t need to worry about passing. In fact, it’s easier if you let the tests fail at first. Focus on the intended usage instead of getting passing tests.

Once your code builds, how much should you implement? That’s the topic of the next section. The main idea is to do as little as possible.

Do only what is needed to pass

When writing code, it’s easy to think of all the possibilities of how a method might be used, for example, and to write code to handle each possibility right away. This gets easier with experience and is normally viewed as a good way to write robust code without forgetting to handle different use cases or error conditions.

I urge you to scale back your eagerness to write all this at once. Instead, do only what is needed to pass a test. Then, as you think of other use cases, write a test for each, before extending your code to handle them. The same applies to error cases. As you think of some new error handling that should be added, write a test that will cause that error condition to arise before handling it in your code.

To see how this is done, let’s extend the test library to allow for expected exceptions. We have two test cases right now:

#include "../Test.h"
TEST("Test can be created")
{
}
TEST("Test with throw can be created")
{
    throw 1;
}

The first makes sure that a test can be created. It does nothing and passes. The second test throws an exception. It actually just throws a simple int value of 1. This causes the test to fail. It might seem demotivating to see one or more of your tests fail. But remember, we just got the test to build and that is an accomplishment you should feel good about.

When we initially added the second test in the previous chapter, the goal was to make sure that multiple tests could be added. And the int was thrown to make sure that any exceptions would be treated as a failure. We weren’t yet ready to fully handle thrown exceptions. That’s what we’re going to do now.

We’re going to take the existing code that throws an exception and turn it into an expected exception, but we are going to follow the advice given here and do the absolute minimum to get this working. That means we’re not going to jump right into a solution that tries to throw multiple different exceptions, and we’re not yet going to handle the case where we think an exception should be thrown but it doesn’t get thrown.

Because we’re writing the testing library itself, our focus sometimes will be on the tests themselves. In many ways, the tests become similar to whatever project-specific code you’ll be working with. So, while right now we need to be careful not to add a bunch of tests all at once, later you’ll want to be careful not to add a bunch of extra code that doesn’t yet have tests. You’ll see this shift once we get the test library to a more feature-complete version and then start using it to create a logging library. At that point, the guidance will apply to the logging library, and we’ll want to avoid adding extra logic to handle different logging scenarios without first adding tests to exercise those scenarios.

Starting with the end usage in mind, we need to think about what the TEST macro usage should look like when there is an expected exception. The main thing we need to communicate is the type of exception that we expect to be thrown.

There will only be one type of exception needed. Even if some code under test throws multiple exception types, we don’t want to list more than one exception type per test. That’s because, while it’s okay for code to check different error conditions and throw a different exception type for each error, each test itself should be written to only test one of these error conditions.

If you have a method that can sometimes throw different exceptions, then you should have a test for each condition that leads to each exception. Each test should be specific and always either lead to a single exception or no exception at all. And if a test expects an exception to be thrown, then that exception should always be thrown in order for the test to be considered to pass.

Later in this chapter, we’ll get to the more complicated situation of not catching an exception when one is expected. For now, we want to do only what is needed. Here is what the new usage looks like:

TEST_EX("Test with throw can be created", int)
{
    throw 1;
}

The first thing you’ll notice is that we need a new macro in order to pass the type of exception that is expected to be thrown. I’m calling it TEST_EX, which stands for test exception. Right after the name of the test is a new macro argument for the type of exception that is expected. In this case, it’s an int because the code throws 1.
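
Going back to the guideline that each test should check a single error condition, here is a purely hypothetical sketch of how a function that throws two different exception types might get two separate tests. The parseAge function and its behavior are made up for this illustration and are not part of the test library:

#include "../Test.h"
#include <stdexcept>
#include <string>
#include <string_view>

// Hypothetical function under test: throws std::invalid_argument for
// non-numeric text and std::out_of_range for negative ages.
inline int parseAge (std::string_view text)
{
    int age = std::stoi(std::string(text));   // throws std::invalid_argument
    if (age < 0)
    {
        throw std::out_of_range("age cannot be negative");
    }
    return age;
}

TEST_EX("Parsing non-numeric text throws invalid_argument", std::invalid_argument)
{
    parseAge("abc");
}
TEST_EX("Parsing a negative age throws out_of_range", std::out_of_range)
{
    parseAge("-1");
}

Each test sets up exactly one error condition and names exactly one expected exception type, so a failure points directly at the condition that caused it.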

Why do we need a new macro?

Because macros are not really functions. They just work with simple text replacement. We want to be able to tell the difference between a test that doesn’t expect any exceptions to be thrown versus a test that does expect an exception to be thrown. Macros don’t have the ability to be overloaded like a method or function, with each different version declared with different parameters. A macro needs to be written with a specific number of parameters in mind.

When a test doesn’t expect any exception to be thrown, it doesn’t make any sense to pass some placeholder value for the exception type. It’s better to have one macro that takes just the name and means that no exception is expected, and another macro that takes a name and an exception type.

This is a real example of where the design needs to compromise. Ideally, there would not be a need for a new macro. We’re doing the best with what the language gives us here. Macros are an old technology with their own rules.
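
As a reminder of the other half of this design, here is roughly what the single-parameter TEST macro from the previous chapter looks like. The exact code in your copy may differ slightly, but the shape is the same as TEST_EX without any exception handling:

#define TEST( testName ) \
class MERETDD_CLASS : public MereTDD::TestBase \
{ \
public: \
    MERETDD_CLASS (std::string_view name) \
    : TestBase(name) \
    { \
        MereTDD::getTests().push_back(this); \
    } \
    void run () override; \
}; \
MERETDD_CLASS MERETDD_INSTANCE(testName); \
void MERETDD_CLASS::run ()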

Going back to the TDD process, you can see that we’re again starting with the end usage in mind. Is this solution acceptable? It doesn’t exist yet. But if it did, would it feel natural? I think so.

There’s no real point in trying to build right now. This is a time when we’ll take a shortcut and skip the actual build. In fact, in my editor, the int type is already highlighted as an error. The editor complains that we’re using a keyword incorrectly, and the usage might look strange to you as well. You can’t just pass types, whether they are keywords or not, as method arguments. Remember that a macro is not really a method, though. Once the macro has been fully expanded, the compiler will never see this strange usage of int. You can pass types as template parameters, but macros don’t support template parameters either.

Now that we have the intended usage, the next step is to think about the solution that will enable this usage. We don’t want the test author to have to write a try/catch block for the expected exception. That’s what the test library should do. This means we’ll need a new method inside the Test class that does have a try/catch block. This method can catch the expected exception and ignore it for now. We ignore it because we are expecting the exception, which means if we catch it, then the test should pass. If we let the expected exception continue outside of the test, then the runTests function will catch it and report a failure due to an unexpected exception.

We want to keep the catch all inside runTests because that’s how we detect unexpected exceptions. For unexpected exceptions, we don’t know what type to catch because we want to be ready to catch anything.

Here, we do know what type of exception to expect because it is being provided in the TEST_EX macro. We can have the new method in the Test class catch the expected exception. Let’s call this new method runEx. All the runEx method needs to do is look for the expected exception and ignore it. If the test throws something else, then runEx won’t catch it. But the runTests function will be sure to catch it.

Let’s look at some code to understand better. Here is the TEST_EX macro in Test.h:

#define TEST_EX( testName, exceptionType ) \
class MERETDD_CLASS : public MereTDD::TestBase \
{ \
public: \
    MERETDD_CLASS (std::string_view name) \
    : TestBase(name) \
    { \
        MereTDD::getTests().push_back(this); \
    } \
    void runEx () override \
    { \
        try \
        { \
            run(); \
        } \
        catch (exceptionType const &) \
        { \
        } \
    } \
    void run () override; \
}; \
MERETDD_CLASS MERETDD_INSTANCE(testName); \
void MERETDD_CLASS::run ()

You can see that all runEx does is call the original run method inside of a try/catch block that catches the exceptionType specified. In our specific case, we will catch an int and ignore it. All this does is wrap up the run method with a try/catch block so that the test author doesn’t have to.

The runEx method is also a virtual override. That’s because the runTests function needs to call runEx instead of calling run directly. Only then will expected exceptions be caught. We don’t want runTests to sometimes call runEx for tests with an expected exception and to call run for those tests without an expected exception. It will be better if runTests always calls runEx.

This means we need to have a default implementation of runEx that just calls run without a try/catch block. We can do that in the TestBase class, which will need to declare the virtual runEx method anyway. The run and runEx methods look like this inside TestBase:

    virtual void runEx ()
    {
        run();
    }
    virtual void run () = 0;

The TEST_EX macro that expects an exception will override runEx to catch the exception, and the TEST macro that does not expect an exception will use the base runEx class implementation, which just calls run directly.

Now, we need to modify the runTests function to call runEx instead of run, like this:

inline int runTests (std::ostream & output)
{
    output << "Running "
        << getTests().size()
        << " tests\n";
    int numPassed = 0;
    int numFailed = 0;
    for (auto * test: getTests())
    {
        output << "---------------\n"
            << test->name()
            << std::endl;
        try
        {
            test->runEx();
        }
        catch (...)
        {
            test->setFailed("Unexpected exception thrown.");
        }

Only the first half of the runTests function is shown here. The rest of the function remains unchanged. It’s really just the single line of code in the try block that now calls runEx that needed to be updated.

We can now build the project and run it to see how the tests perform. The output looks like this:

Running 2 tests
---------------
Test can be created
Passed
---------------
Test with throw can be created
Passed
---------------
All tests passed.
Program ended with exit code: 0

The second test used to fail but now it passes because the exception is expected. We also followed the guidance for this section, which is to do only what is needed to pass. The next step in the TDD process is to enhance a test and get another pass.

Enhancing a test and getting another pass

What happens if a test that expects an exception does not see the exception? That should be a failure, and we’ll handle it next. This situation is a little different because the next pass is really going to be a failure.

When you’re writing tests and following the guidance to first do the minimum amount to get a first passing result and then enhancing the test to get another pass, you’ll be focused on passing. That’s good because we want all the tests to eventually pass.

A failing test should almost always be treated as a failure. It doesn’t usually make sense to have expected failures in your tests. What we’re about to do here is a bit out of the ordinary, and that’s because we’re still developing the test library itself. We need to make sure that an expected exception that never gets thrown causes the test to fail. We then want to treat that failed test as a pass because we’re testing the ability of the test library to catch these failures.

Right now, we have a hole in the test library because adding a third test that expects an int to be thrown but never actually throws an int is seen as a passing test. In other words, the tests in this set all pass:

#include "../Test.h"
TEST("Test can be created")
{
}
TEST_EX("Test with throw can be created", int)
{
    throw 1;
}
TEST_EX("Test that never throws can be created", int)
{
}

Building this works okay and running it shows that all three tests pass:

Running 3 tests
---------------
Test can be created
Passed
---------------
Test with throw can be created
Passed
---------------
Test that never throws can be created
Passed
---------------
All tests passed.
Program ended with exit code: 0

This is not what we want. The third test should fail because it expected an int to be thrown but that did not happen. But that also goes against the goal that all tests should pass. There is no way to have an expected failure. Sure, we might be able to add this concept into the testing library, but that would add extra complexity.

If we were to add the ability for a test to fail but still be treated as if it passed, then what would happen if the test failed for some unexpected reason? It would be easy for a bad test to be written that fails for multiple reasons but actually gets reported as a pass because the failure was expected.

While writing this, I initially decided not to add the ability to have expected failures. My reasoning was that all tests should pass. But that left us in a bind, because how else can we verify that the test library itself can properly detect missing expected exceptions?

We need to close the hole exposed by the third test.

There is no good answer to this dilemma. So, what I’m going to do is get this new test to fail and then add the ability to treat a failure as a success. I don’t like the alternatives, which are to leave the test in the code but comment it out so that it doesn’t actually run, or to delete the third test entirely.

What finally convinced me to add support for successful failing tests was the idea that everything should be tested, especially big things, such as the ability to make sure that an expected exception is always thrown. You probably won’t need to mark a test as an expected failure, but if you ever do, you’ll be able to use the same approach. We are in a unique situation because we need to test something about the test library itself.

Alright, let’s get the new test to fail. The minimum amount of code needed for this is to return if the expected exception was caught. If the exception was not caught, then we throw something else. The code to update is the TEST_EX macro override of the runEx method, like this:

    void runEx () override \
    { \
        try \
        { \
            run(); \
        } \
        catch (exceptionType const &) \
        { \
            return; \
        } \
        throw 1; \
    } \

The rest of the macro is unchanged, so only the runEx override is shown here. We return when the expected exception is caught, which will cause the test to pass. And after the try/catch block, we throw something else that will cause the test to fail.

If you find it strange to see a simple int value being thrown, remember that our goal is to do the absolute minimum needed at this point. You would never want to leave code that throws something like this and we will fix that next.

This works and is great because it is the minimum amount needed to do what we want, but the result looks strange and misleading. Here is the test result output:

Running 3 tests
---------------
Test can be created
Passed
---------------
Test with throw can be created
Passed
---------------
Test that never throws can be created
Failed
Unexpected exception thrown.
---------------
Tests passed: 2
Tests failed: 1
Program ended with exit code: 1

You can see that we got a failure but the message says Unexpected exception thrown.. This message is almost the exact opposite of what we want. We want it to say that an expected exception was not thrown. Let’s fix this before we continue turning it into an expected failure.

First, we need some way for the runTests function to detect the difference between an unexpected exception and a missing exception. Right now, it just catches everything and treats any exception as unexpected. If we were to throw something special and catch it first, then that could be the signal that an exception was missing. And anything else that gets caught would be unexpected. OK, what should this special throw be?

The best thing to throw is going to be something that the test library defines specifically for this purpose. We can define a new class just for this.

Let’s call it MissingException and define it inside the MereTDD namespace, like this:

class MissingException
{
public:
    MissingException (std::string_view exType)
    : mExType(exType)
    { }
    std::string_view exType () const
    {
        return mExType;
    }
private:
    std::string mExType;
};

Not only will this class signal that an expected exception was not thrown but it will also keep track of the type of exception that should have been thrown. The type will not be a real type in the sense that the C++ compiler understands types. It will be the text representation of that type. This actually fits well with the design because that’s what the TEST_EX macro accepts, a piece of text that gets substituted in the code for the actual type when the macro is expanded.

Inside the TEST_EX macro implementation of the runEx method, we can change it to look like this:

    void runEx () override \
    { \
        try \
        { \
            run(); \
        } \
        catch (exceptionType const &) \
        { \
            return; \
        } \
        throw MereTDD::MissingException(#exceptionType); \
    } \

Instead of throwing an int like before, the code now throws a MissingException. Notice how it uses another feature of macros, which is the ability to turn a macro parameter into a string literal with the # operator. By placing # before exceptionType, it will turn the int provided in the TEST_EX macro usage into an "int" string literal, which can be used to initialize the MissingException with the name of the type of exception that is expected.
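
To make the text replacement concrete, here is a rough sketch of what the preprocessor produces for the runEx body of the third test. The surrounding class and instance names are generated by the MERETDD_CLASS and MERETDD_INSTANCE macros and are omitted here:

    // Approximate expansion of the runEx body for
    // TEST_EX("Test that never throws can be created", int)
    void runEx () override
    {
        try
        {
            run();
        }
        catch (int const &)    // exceptionType replaced with int
        {
            return;
        }
        throw MereTDD::MissingException("int");    // #exceptionType becomes "int"
    }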

We’re now throwing a special type that can identify a missing exception, so the only piece remaining is to catch this exception type and handle it. This happens in the runTests function, like this:

        try
        {
            test->runEx();
        }
        catch (MissingException const & ex)
        {
            std::string message = "Expected exception type ";
            message += ex.exType();
            message += " was not thrown.";
            test->setFailed(message);
        }
        catch (...)
        {
            test->setFailed("Unexpected exception thrown.");
        }

The order is important. We need to first try catching MissingException before catching everything else. If we do catch MissingException, then the code changes the message that gets displayed to let us know what type of exception was expected but not thrown.

Running the project now shows a more applicable message for the failure, like this:

Running 3 tests
---------------
Test can be created
Passed
---------------
Test with throw can be created
Passed
---------------
Test that never throws can be created
Failed
Expected exception type int was not thrown.
---------------
Tests passed: 2
Tests failed: 1
Program ended with exit code: 1

This clearly describes why the test failed. We now need to turn the failure into a passing test and it will be nice to keep the failure message. We’ll just change the status from Failed to Expected failure. Since we’re keeping the failure message, I have an idea for something that will make this ability to mark failed tests as passing a safer feature.

What do I mean by a safer feature? Well, this was one of my biggest concerns with adding the ability to have expected failures. Once we mark a test as an expected failure, then it would be too easy for the test to fail for other reasons. Those other reasons should be treated as real failures because they were not the expected reason. In other words, if we just treat any failure as if a test passes, then what happens if a test fails for a different reason? That would be treated as a pass too and that would be bad. We want to mark failures as passing but only for expected failures.

In this particular case, if we were to just treat a failure as a pass, then what would happen if the test was supposed to throw an int but instead threw a string? That would definitely cause a failure and we need a test case for this too. We might as well add that test now. We don’t want to treat the throwing of a different exception the same as not throwing any exception at all. Both are failures but the tests should be specific. Anything else should cause a legitimate failure.

Let’s start with the end usage in mind and explore how best to express the new concept. I thought about adding an expected failure message to the macro but that would require a new macro. And really, it would require a new macro for each macro we already have. We’d need to extend both the TEST macro and the TEST_EX macro with two new macros, such as FAILED_TEST and FAILED_TEST_EX. That doesn’t seem like a very good idea. What if, instead, we add a new method to the TestBase class? It should look like this when used in the new tests:

// This test should fail because it throws an
// unexpected exception.
TEST("Test that throws unexpectedly can be created")
{
    setExpectedFailureReason(
        "Unexpected exception thrown.");
    throw "Unexpected";
}
// This test should fail because it does not throw
// an exception that it is expecting to be thrown.
TEST_EX("Test that never throws can be created", int)
{
    setExpectedFailureReason(
        "Expected exception type int was not thrown.");
}
// This test should fail because it throws an
// exception that does not match the expected type.
TEST_EX("Test that throws wrong type can be created", int)
{
    setExpectedFailureReason(
        "Unexpected exception thrown.");
    throw "Wrong type";
}

Software design is all about trade-offs. We are adding the ability to have a failing test turn into a passing test. The cost is extra complexity. Users need to know that the setExpectedFailureReason method needs to be called inside the test body to enable this feature. But the benefit is that we can now test things in a safe manner that would not have been possible otherwise. The other thing to consider is that this ability to set expected failures will most likely not be needed outside of the test library itself.

Expected failure reasons are also a little hard to get right. It’s easy to miss something, such as a period at the end of the failure reason. The best way I found to get the exact reason text is to let the test fail and then copy the reason from the summary description.

Until now, we haven’t been able to have a test that specifically looks for a completely unexpected exception. Now, we can. And for the times when we expect an exception to be thrown, we can now check the two failure cases that go along with this, when the expected type is not thrown and when something else is thrown.

All of this is better than the alternative of either leaving these tests out or commenting them out, and we can do all this without adding more macros. Of course, the tests won’t compile yet because we haven’t created the setExpectedFailureReason method. So, let’s add that now:

    std::string_view reason () const
    {
        return mReason;
    }
    std::string_view expectedReason () const
    {
        return mExpectedReason;
    }
    void setFailed (std::string_view reason)
    {
        mPassed = false;
        mReason = reason;
    }
    void setExpectedFailureReason (std::string_view reason)
    {
        mExpectedReason = reason;
    }
private:
    std::string mName;
    bool mPassed;
    std::string mReason;
    std::string mExpectedReason;
};

We need a new data member to hold the expected reason, which will be an empty string, unless set inside the test body. We need the setExpectedFailureReason method to set the expected failure reason and we also need an expectedReason getter method to retrieve the expected failure reason.

Now that we have this ability to mark tests with a specific failure reason that is expected, let’s look for the expected failures in the runTests function:

        if (test->passed())
        {
            ++numPassed;
            output << "Passed"
                << std::endl;
        }
        else if (not test->expectedReason().empty() &&
            test->expectedReason() == test->reason())
        {
            ++numPassed;
            output << "Expected failure\n"
                << test->reason()
                << std::endl;
        }
        else
        {
            ++numFailed;
            output << "Failed\n"
                << test->reason()
                << std::endl;
        }

You can see the new test for tests that did not pass in the else if block. We first make sure that the expected reason is not empty and that it matches the actual failure reason. If the expected failure reason matches the actual failure reason, then we treat this test as a pass because of an expected failure.

Building the project and running it now shows that all five tests are passing:

Running 5 tests
---------------
Test can be created
Passed
---------------
Test that throws unexpectedly can be created
Expected failure
Unexpected exception thrown.
---------------
Test with throw can be created
Passed
---------------
Test that never throws can be created
Expected failure
Expected exception type int was not thrown.
---------------
Test that throws wrong type can be created
Expected failure
Unexpected exception thrown.
---------------
All tests passed.
Program ended with exit code: 0

You can see the three new tests that have expected failures. All of these are passing tests and we now have an interesting ability to expect tests to fail. Use it wisely. It is not normal to expect tests to fail.

We still have one more scenario to consider. And I’ll be honest and say that I took a break for an hour or so before I thought of this. We need to make sure the test library covers everything we can think of because you’ll be using it to test your code. You need to have a high level of confidence that the test library itself is as bug-free as possible.

Here’s the case we need to handle. What if there is a test case that’s expected to fail for some reason but it actually passes? Right now, the test library first checks whether the test has passed, and if so, then it doesn’t even look to see whether it was supposed to have failed. If it passes, then it passes.

But if you go to all the trouble to set an expected failure reason and the test passes instead, what should be the outcome? What we have is a failure that should have been treated as a pass that actually passed instead. Should this be a failure after all? A person could go dizzy thinking about these things.

If we treat this as a failure, then we’re back to where we started with a test case that we want to include but that is designed to ultimately fail. And that means we either have to live with a failure in the tests, ignore the scenario and skip the test, write the test and then comment it out so it doesn’t normally run, or find another solution.

Living with a failure is not an option. When using TDD, you need to get all your tests to a passing state. It does no good to expect a failure. That’s the whole reason we went to all the trouble of allowing failing tests to be expected to fail. Then, we can call those failures passes because they were expected.

Skipping a test is also not an option. If you decide something is really not an issue and doesn’t need to be tested, then that’s different. You don’t want a bunch of useless tests cluttering up your project. This seems like something important that we don’t want to skip, though.

Writing a test and then disabling it so it doesn’t run is also a bad idea. It’s too easy to forget the test ever existed.

We need another solution. And no, it’s not going to be to add another level where a passing test that should have failed in an expected way instead is treated as a failure, which we will then somehow mark as passing again. I’m not even sure how to write that sentence, so I’m going to leave it as confusing as it sounds. That path leads to a never-ending cycle of pass-fail-pass-fail-pass thinking. Too complicated.

The best idea I can come up with is to treat this case as a missed failure. That will let us test for the scenario and always run the test, while avoiding a true failure that would cause automated tools to reject the build.

Here is the new test that shows the scenario just described. It will currently pass without any problems:

// This test should throw an unexpected exception
// but it doesn't. We need to somehow let the user
// know what happened. This will result in a missed failure.
TEST("Test that should throw unexpectedly can be created")
{
    setExpectedFailureReason(
        "Unexpected exception thrown.");
}

Running this new test does indeed pass unnoticed like this:

Running 6 tests
---------------
Test can be created
Passed
---------------
Test that throws unexpectedly can be created
Expected failure
Unexpected exception thrown.
---------------
Test that should throw unexpectedly can be created
Passed
---------------
Test with throw can be created
Passed
---------------
Test that never throws can be created
Expected failure
Expected exception type int was not thrown.
---------------
Test that throws wrong type can be created
Expected failure
Unexpected exception thrown.
---------------
All tests passed.
Program ended with exit code: 0

In the runTests function, for tests that pass, we need to check whether an expected failure reason has been set and, if so, increment a new numMissedFailed count instead of the passed count. The new count should also be summarized at the end, but only if it’s anything other than zero.

Here is the beginning of runTests, where the new numMissedFailed count is declared:

inline int runTests (std::ostream & output)
{
    output << "Running "
        << getTests().size()
        << " tests\n";
    int numPassed = 0;
    int numMissedFailed = 0;
    int numFailed = 0;

Here is the part of runTests that checks for passing tests. Inside of here is where we need to look for a passing test that was supposed to have failed with an expected failure reason:

        if (test->passed())
        {
            if (not test->expectedReason().empty())
            {
                // This test passed but it was supposed
                // to have failed.
                ++numMissedFailed;
                output << "Missed expected failure\n"
                    << "Test passed but was expected to fail."
                    << std::endl;
            }
            else
            {
                ++numPassed;
                output << "Passed"
                    << std::endl;
            }
        }

And here is the end of the runTests function that summarizes the results. This will now show the test failures missed, if there are any:

    output << "---------------
";
    output << "Tests passed: " << numPassed
           << "
Tests failed: " << numFailed;
    if (numMissedFailed != 0)
    {
        output << "
Tests failures missed: " <<         numMissedFailed;
    }
    output << std::endl;
    return numFailed;
}

The summary at the end was starting to get more complicated than it needed to be. So, it now always shows the passed and failed counts, and shows the missed failures only if there are any. We now get a missed failure for the new test that was expected to fail but ended up passing.

Should the missed failures be included in the failure count? I thought about this and decided to only return the number of actual failures for all the reasons just explained that led to this scenario in the first place. Remember that it’s highly unlikely that you will ever find yourself needing to write a test that you intend to fail and then treat as a pass. So, you should not have missed failures either.
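
That return value matters because main passes it along as the process exit code, which is what the Program ended with exit code lines in the output show. A minimal sketch, assuming a main wired up like the one from the previous chapter, looks like this:

#include "../Test.h"
#include <iostream>

int main ()
{
    // Forward the number of failed tests as the exit code so that
    // automated tools can detect failures.
    return MereTDD::runTests(std::cout);
}

An automated build that checks the exit code will then reject the build only for real failures, not for missed failures.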

The output looks like this:

Running 6 tests
---------------
Test can be created
Passed
---------------
Test that throws unexpectedly can be created
Expected failure
Unexpected exception thrown.
---------------
Test that should throw unexpectedly can be created
Missed expected failure
Test passed but was expected to fail.
---------------
Test with throw can be created
Passed
---------------
Test that never throws can be created
Expected failure
Expected exception type int was not thrown.
---------------
Test that throws wrong type can be created
Expected failure
Unexpected exception thrown.
---------------
Tests passed: 5
Tests failed: 0
Tests failures missed: 1
Program ended with exit code: 0

We should be good now for this part. You have the ability to expect an exception and rely on your test to fail if the exception is not thrown, and the test library fully tests itself with all the possible combinations around expected exceptions.

This section also demonstrated multiple times how to continue enhancing your tests and getting them to pass again. If you follow this process, you’ll be able to gradually build your tests to cover more complicated scenarios.

Summary

This chapter has taken the steps we’ve already been following and made them explicit.

You now know to write code the way you want it to be used first, instead of diving into the details and working from the bottom up in order to avoid build failures. It’s better to work from the top, or an end user point of view, so that you will have a solution you’ll be happy with, instead of a buildable solution that is hard to use. You do this by writing tests as you would like your code to be used. Once you are happy with how your code will be used, then build it and look at the build errors to fix them. Getting the tests to pass is not the goal yet. This slight change in focus will lead to designs that are easier and more intuitive to use.

Once your code builds, the next step is to do only what is needed to get the tests to pass. It’s always possible that a change will cause tests that used to pass to now fail. That’s okay and is another good reason to do only what is needed.

And finally, you can enhance a test or add more tests before writing the code needed to pass everything again.

The test library is far from complete though. The only way to cause tests to fail right now is to throw an exception that is not expected. You can see that, even at a higher level, we’re following the practice of doing only what is needed, getting that working, and then enhancing the tests to add more.

The next enhancement is to let the test programmer check conditions within a test to make sure everything is working correctly. The next chapter will begin this work.
